Wednesday, December 30, 2009

Computer Language Jungles

I've noticed that the number of computer languages has grown exponentially since I first learned one back in about 1969. I learned FORTRAN in college, barely, and following that, IBM 360 BAL (Basic Assembly Language). Then COBOL.

I have taken a spoon and scooped out the part of my brain that learned those 3 languages very long ago.

However, I cannot even begin to list all the compiled languages, scripting languages, meta-languages and implementations of lambda calculus with various cryptic names that have since exploded into existence. I wonder why this has to be.

It seems that once I learned the 4th or 5th assembly language (finally, for 64-bit Intel), I'd had it with those. Also, way back when there were only a few flavors of C (always bastardized for some specific company like Microsoft, DEC, etc.), I could have just settled on C. In fact I prefer C even today. Concise, no bullshit, fast.

But, no. There had to be every Tom, Dick and Harry's version of re-implemented-smalltalk-like-pascal-superscript-lisp-forth-C++. In addition to many C++ versions there is C# (and 3 different ones, too!). There are so many languages I can only refer to Wikipedia for a relatively comprehensive list of "famous" languages.

There are, unfortunately, many "unfamous" languages that were designed in-house for various corporations, or even for specific departments within corporations, or even for specific people. The reason for this is not clear, but I would bet that it has to do with the phenomenon that it is easier for some people to learn something by re-inventing it.

I think there is also another factor, one that is very apparent with JavaScript and various networking languages -- security. The only way security can be maintained is to write a new language that implements whatever essential subset of instructions or functions must be there, and leaves out the unneeded or dangerous ones. Thus there are dozens of half-JavaScripts out there that implement everything except the ability to actually use them in real web pages. That way there are no security problems.

This is similar in nature to JavaScript itself (which is a misnomer anyway -- it is EcmaScript, which sounds much like a skin rash to my ears), which leaves out all the file I/O, process controls and other things that could let a virus or worm loose on networked computers. Despite this, various viruses and worms were written in JavaScript anyway.

So, there are all kinds of languages -- at least one for every 2 or 3 programmers. I've written my own languages (3 in total), although I would never expect them to still exist after all these years. They were specific to certain manufactured robots anyway, so they would not be useful for anything more general. One was called ZMAT, which is all I need to remember about it.

I have also written 3 operating systems. One was for a z8000 system (ZOS), another was for a z80000 system (ZMS), and another was for a graphics processing board (GAS). But all of those have since died merciful deaths. Like most computer hardware systems, they had short life spans, so the operating systems that ran them also had short life spans.

I hope to never write another OS or another language. But even though C is just fine with me, or even C++ if it needs to have formality and STL, it seems that for the rest of my life (probably not excessively long, now) I might still have to use dozens of other languages.

In the world of WPF (within the world of Visual Studio version whatever++) there are many subset languages, mostly C# and XAML. In other systems there are equivalents, done in C++ and XML, or Java and XML. I could just shoot C# in the head. It has nothing on Java (not that Java smells that nice either), so it seems merely another way for Microsoft to jam their programming environments down our throats.

Oh, well. It's a living. Oh, wait! It's not even a living. Nobody wants to pay programmers anymore. Unless they live in India. Yet another batch of languages.

Tuesday, December 22, 2009

Disbeliever

All my life, from childhood to the present, I have been disparaged as a "nonbeliever". I would rather be labeled a "disbeliever" however, since that is closer to what I really am.

A non-believer is someone who believes nothing. That is probably not possible for a human brain. The brain will tilt one way or the other, regardless of any truth in that direction or not. A disbeliever is someone who chooses to NOT believe in something that is being forced down their throat. That is me.

In many of my posts I've blithered endlessly about theories of various kinds and it is unlikely that any further blithering is necessary. Suffice it to say that there are plenty of scientifically proven subjects which, if studied closely, are very difficult to disprove. Yet there are many people who disbelieve them. They choose to NOT believe things which are already proven beyond any doubt.

I am not like that. I will not kick a brick, because I know that my toe is less substantial than the brick. No amount of "disbelief" in bricks will change that fact. My disbelief is in things offered as opinion.

Ad hominem attacks are the most common form of propaganda. It is very easy to destroy someone's reputation by the copious use of slander. Given any name, whichever side of any "belief" that person falls into, some kind of belittlement can be used in the hopes of changing someone's opinion of them.

I will not use any example, because that is exactly what is so slimy about the subject -- the example cannot be made in the abstract. It can only be made in such a way to disparage someone.

So, my attack is on the use of propaganda itself. To use riches to buy up as many channels of information broadcast as possible and then blither on about whatever political crap or religious drivel -- that is what is despicable. So, if you do this, you know who you are and what you are.

But why am I a disbeliever? Because, other than kicking bricks and other physically provable situations, there is nothing which can be trusted. There are so many liars, marketing pukes, political slanderers and reputation destroyers out there that the residual "believable" information is too tiny to notice. It doesn't matter what "ism" you push, you are just a pusher -- like a drug pusher.

Someone might point at this article as an example of disparagement, and they would be right. It is very difficult to write anything meta-political without stepping on somebody's toes. But I don't mind stepping on the toes of frauds, thieves or other kinds of crooks. And, unfortunately, that is the most common form of propagandist out there.

What about religion? Is that not a "non-evil" form of propaganda? In some ways I agree with that. Certainly the pleadings for people to discard their selfish, pompous ways and live their lives according to some standard values for the greater good -- can't be bad. I think religions get kind of out of control, though. Some wind up just being another kind of greedy corporate entity, with an endless supply of suckers to bilk. So, I "disbelieve" in religion just the same as the rest.

Am I not then "throwing the baby out with the bathwater?" Not entirely, because I still believe in the reality of physics. Solipsism is not exactly my bag. I think reality happens to me, I don't create reality as I go along. There is no opinion, such as saying that "materialism is so nerdy", that can disparage my belief that kicking bricks will hurt my toe. So I do not extend "disbelief" to obvious, measurable phenomena.

Does this mean I "don't believe in aliens?" Well, that is a bit difficult to swallow without actually being confronted by a real, nuts-and-bolts alien. But the "possibility" of aliens is still fairly high, somewhere out there in the vast numbers of stars. But believing in aliens and believing in doctrines of opinion are completely separate things. It is far more likely that both Democrats and Republicans are equally polluted by corporate influence than "aliens are visiting the Earth."


Thursday, December 17, 2009

Working Stiffs

The term "working stiff" applies to all of us humans that actually get up every morning and start doing something in the attempt to make a small sum of money. Such people as gardeners, nannies, grocery store cashiers, computer programmers, janitors, teachers, cooks, nurses, doctors, mechanics, technicians and several thousand other kinds of workers -- all fall into this category.

There are other people, for instance the CEO of Exxon, who probably get up in the morning and also start doing something, however they do it for vast sums of money and are never referred to as "working stiffs".

I have noticed that the GOP, the right wing party in America, has become quite incensed with any hint of socialism that might creep into our government. I'm not sure they understand the difference between socialism and totalitarianism (which is closer to GOP thinking than socialism itself). The fear of socialism would be quite understandable if you are the CEO of Exxon, or perhaps the King of Texas (I mean, the Governor), because socialism means that they might be forced to accept less money for their stupendously superior amounts of work compared to, say, a doctor.

Somehow, with the marvelous information propagation capabilities heretofore reserved for such greats as Goebbels or Stalin, the GOP has managed to make every gas station attendant, waitress and stock yard worker in America take up pitchforks against them damn socialists, wherever they are. This is because... well, I don't exactly know why.

Why would a waitress who makes $2.30 per hour, plus maybe $10 or $20 in tips, feel so incensed about taxes she doesn't even pay that she would stick a needle in your eye for voting in favor of increasing her chances of getting quality health care someday? Why do working stiffs actually desire getting stiffed by the Insurance Industry? Why do "nonworking" working stiffs who can't even get a job wish for any possible job they might get to be outsourced to China?

I can at least understand the hatred some GOPs have for other races. I don't like it, but I understand it. I understand the hatred they have for other religions, again something I'm not especially on board with. They have a whole drawer full of stock hate-symbols to drag out for every occasion. But I am not exactly sure why all the working class heroes are so dead set against workers' rights, minimum standards of health care, education and so forth. It's like wanting to own guns in order to shoot yourself in the feet.

Why are the GOPs so effective in making working stiffs desperate to screw up their own lives?

Propaganda on mass media. It works. Ask any preacher, advertising executive or producer for You-Know-Who News. They know it works and the GOP knows it works. And we working stiffs know it works -- so stop falling for it. Think for yourself.

Wednesday, December 9, 2009

Believe me, believe me not...

What makes a person decide to believe something? It seems very unlikely that people will just believe 50% of everything they are told is "true" and then disbelieve the other 50%. And there is also the "pollster" effect -- that people are more likely to say they believe something if it is expected of them within the context of a question.
Likewise, if I suddenly found myself surrounded by whooping and hollering painted savages who revere the great stone Volcano God, I doubt that I would just argue about the fine points of it all, or point out how silly such a thing is. I would probably just whoop and holler along with them, in fear that I might be found to "believe" the wrong thing.

Yet, in any of these instances the belief is a public thing -- a social thing. It is not necessarily what you believe regardless of any peer pressure. If you are in a cowboy bar, I doubt you are going to shout anything about why everyone is wearing those big hats and buckles. Nobody wants to get beat up over inconsequential things, so you just go along with it.

Science is my "thing", but it is a very dangerous thing for all the above reasons and more. Cowboys in the late 1800s, confronted with the scientific fact that their longhorn cows were spreading diseases, would just as soon shoot you as listen to reason. That was their livelihood you were messin' with.

In the current day, oil and coal are the big thing, and they are spreading a lot of death around, although mostly in less obvious ways. Most people don't care if this or that microbe in the ocean is dying off. Or even that coral reefs are dying off. Just so long as they can drive to work every day and heat the house in the winter -- who cares about those bugs and slimy things anyway?

So, it is very popular nowadays to belittle scientists as being "blinded by science", or greedy for "research funding" or whatever. If you think greed is the issue, you should try being a scientist. They generally aren't rich people. And being blinded by a whole bunch of measurements and collections of facts is hardly worse than checking how much you pay for those shoes and making sure your pants fit before you buy them.

There are also scientists who somewhat smugly ridicule "lesser educated folks" and try to push their objectivity theories as a kind of superior religion which should supplant all inferior belief systems as a matter of course. Whereas there may be good reason to push away "snake oil" religions and the harmful practices of savage rituals, there is hardly any reason to worship nuclear explosions as being more holy than anything else, either.

But you must ask yourself, in a world filled with exceptionally large amounts of discarded chemicals and nuclear waste -- do you believe that magical fairies will fix it all?

Wednesday, November 25, 2009

Lying about Climate Change

There were recent emails stolen from servers which indicate that many climate researchers from several UK and US universities and research institutions were attempting to cover up any evidence that the world's temperatures were not "rising" in step with the predicted global warming that has been so controversial lately.

I am not a climatologist, although I understand the scientific method perfectly well. I could never claim anything one way or the other about climate change with any authority, yet I do understand sensors, computers, statistics and so forth, as they apply to just about any subject in science and engineering. It is a lowly thing to do to "fix" the data to meet predetermined expectations. That is little better than calling dinosaur bones "a joke played by God" in order to disprove evolution theory.

I think there are enough other problems with the world's environment, however. And the very people who are so quick to cry hoax over these problems with climate change science are many times the very people who generate vast profits from the unfettered pollution of the planet by their industries.

However, there are other problems with climate science -- on both sides of the aisle. There are not sensors in every conceivable location in the oceans and atmosphere, nor could there ever be, so the actual "temperature" of the Earth is pretty much guess work. There are some places that heat up and some places that cool down.

All in all the average temperature doesn't change very much, although it is certainly different now than for many of the last few centuries. This slow, agonizingly gradual change in the climate gives either side plenty of elbow room when trying to force their view of the situation upon a confused public.

But the measurements of chemicals such as chlorine compounds -- many of them highly catalytic and problematic for nearly all lifeforms -- are undeniably higher almost everywhere you measure than even just a few decades ago. There are so many man-made chemicals poured into the environment that no scientific means can be used to predict what they will do -- only time will tell. From what studies there have been, the results are not happy. From fish eggs to cow bones and squid swarms, the world is changing.

It could be that all of those chemicals become broken down by various newly evolved bacteria or other dynamic lifeforms. It could also be that those chemicals kill off more lifeforms than can evolve and adapt to them. There is little doubt that they have an effect, and usually a very negative effect upon our world.

The radioactive substances alone are profoundly dangerous and some countries have terrible proliferation controls. It is bad enough just here in the USA where we are paranoid about that kind of stuff. In some countries the ignorant masses are exposed to any number of radioactive substances by merely rummaging through trash or in war-torn places from the left-over depleted uranium dust our anti-tank weapons leave behind.

The catalytic compounds are many and cover a lot of ground. The bromine and fluorine catalysts are less widespread than chlorine, but they have very similar effects wherever they accumulate. Chlorine breaks down our ozone layers and does not become consumed by the chemical reactions, so it is free to break down more ozone for decades before it diffuses back to harmless levels -- if it ever will be reduced in output. Chlorine is even a problem when confined as a salt, where it renders water electrolytic and corrosive.

Dioxins and heavy metals like lead, mercury and arsenic, minerals like asbestos and chemicals like formaldehyde are released as manufacturing companies produce them, or when they try to discard them but have nowhere to put them. These and many more chemicals affect living organisms in bad ways, and oftentimes we live and breathe them in our own homes continuously.

Climate Change, as a religion, never was the true bogey man some claimed it was, although I do not doubt that something is happening to the ice caps and glaciers. It could truly be a problem, no doubt, if the ocean levels are affected, if the tundra melts, if methane is released in massive quantities into our air. But the truth or falsity of that religion means very little if the obvious toxic wastes of human civilization are allowed to keep accumulating without regulation.

No one has proven that climate change is false. They have only proved that scientists can be dishonest, which is little different than businessmen or people in general. But that is exactly what the people who make the vast quantities of wealth from pollution-producing industries want us to believe -- that climate change is false, so we should therefore do nothing about any pollution. That is called "The Straw Man" distraction, and mostly fools will fall for it.

But there are plenty of fools, and many dishonest people to take advantage of them.

Thursday, October 15, 2009

Androids Galore

I recently designed some Android Apps for the Google Phone. I wasn't sure what to make, exactly, since it seems like there is an application for every conceivable concept ever thunk. These are just a sampling of what I've done, since these are the ones I actually published. I have others that are just experimental, or require hardware that isn't always available.

The first one, the Robot Evolver, is actually not done yet. It will be a good one, I think. But it is very difficult to design and write for all the variations of devices, and to make all the body parts of the evolving robots harmonize correctly.

One I made is something called the "BibleCodeX" A-Z Bible Code Sequencer. It takes the entire King James Bible as a linear row of letters from the beginning of Genesis to the end of Revelation.
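The usual trick behind such "codes" is the equidistant letter sequence: pick a starting position and a skip distance, then read off every Nth letter and see if words fall out. A stripped-down sketch of the idea, in C++ rather than the app's Java, with made-up function names and the Bible text loading left out:

    #include <cctype>
    #include <iostream>
    #include <string>

    // Toy sketch of an equidistant-letter-sequence scan; not the app's actual code.

    // Reduce raw text to one linear row of letters, A-Z only.
    std::string toLetterRow(const std::string& text) {
        std::string row;
        for (unsigned char c : text)
            if (std::isalpha(c))
                row += static_cast<char>(std::toupper(c));
        return row;
    }

    // Read an equidistant letter sequence: 'count' letters,
    // starting at 'start', taking every 'skip'-th letter (skip must be >= 1).
    std::string elsAt(const std::string& row, size_t start, size_t skip, size_t count) {
        std::string out;
        for (size_t i = start; out.size() < count && i < row.size(); i += skip)
            out += row[i];
        return out;
    }

    int main() {
        std::string row = toLetterRow("In the beginning God created the heaven and the earth.");
        std::cout << elsAt(row, 0, 7, 5) << "\n";  // 5 letters, every 7th one
        return 0;
    }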

It was actually a lot harder than I thought it would be, mostly due to translating my usual C++ kind of thinking into Android's Java language. My other Java coding was less troublesome because it ran on larger Linux machines without the processing limits that a hand held device would have.

Another (for FREE of course) is just the Android Logo with a beating heart. I shouldn't charge for things that are just trademarks belonging to others.

The Deer Hunting Clock is much harder than you'd think. Mainly it is because of the great detail in the clock photo, which takes a great deal of memory and time to process on the Android phone. There is another version for 480x854 sized screens, but it consumes so much memory it is unsuitable for sale.

Anyway, I'll keep on trucking, and put some more apps out there -- whatever I can think of that has some utilitarian purpose, I guess.

Tuesday, October 6, 2009

Unlucky California

When I first came out here about 43 years ago, it was a happening place. I had just joined the Navy at age 17. A few years later I got out of the Navy and was then wandering around in a daze, wondering what I would do with my life.

After having traveled and lived in other states, especially in the Midwest, I again wandered back to California. It was the only place that I seemed to fit in. I was a kind of half-hippie, half-beatnik, half-science-geek. Whatever I was, it fit in here better than Texas or Nebraska, that’s for sure.

In time, after a rocky start with hard labor jobs, I was able to get into computer shops and eventually became a systems programmer, and then a computer scientist. I designed operating systems, control systems for robots, and myriad other contraptions used in automated manufacturing.

Whatever, those days were “the good old days” in my life at least. And those days are gone. No longer are there any golden opportunities in California. Until this economic collapse is repaired, and some kind of jobs return to this state – I am screwed.

At age 60, I am too old to work as a hard laborer now. I can’t just wander around in jeans and a tee-shirt looking for day labor or mural painting jobs like I did at age 20. Having worked in the computer field so long, it would seem that I’d have an ironclad secure job. But that is not the case.

Anyway, I will not give up. I will keep trying. I keep up with the younger crowd, and try all the new tools, just to keep “hip”. Although .NET, Android, Java and other modern programming environments help to standardize around the “Web 2.0” phenomena, I can’t help but think they are just as temporary as C and C++ were as my tools of the trade for over 20 years.

Still, I will keep going until my brain falls out or my hands crumble, even if I have to push a cart down the street looking for beer and soda cans in between.

Sunday, September 6, 2009

The Slimmest of All

I just wondered what this was about. It is made with “Live Writer” which I downloaded while on Windows 7. It is a very slim document editor. 

These are pictures of my granddaughters. They are about a year older now, but this picture was handy. I have discovered that the pictures are static, and the text doesn’t flow around them like a web page.

Oh, well. So let’s see if this blog thing happens. Well – I was wrong about the text wrapping. I just fixed it so it does. Just have to read the manual more often.

It also updated my blog right handily as well. So this isn’t so bad at all.

The world isn’t flat, and the world of Redmond isn’t always stuck in 1979.

Friday, September 4, 2009

WPF and Silverlight Programming

Click on the images for larger versions.



These are WPF and Silverlight programs with lots of moving objects, a beating heart, and so forth. They can play movies, display other background images in differing opacities, etc. There are many things that aren't apparent by just looking at this picture. WPF runs as a native desktop application and Silverlight runs inside a web page.

These products have been around in varying levels of quality for several years, but only lately have they seemed worth bothering with, especially now that Windows 7 is around. These were created on Windows 7 (a developer's version) using VS2008 with SP1 and other updates. I did not use Expression Blend, although parts of the animations might have been much easier that way.

I haven't posted for a long while since I've been busy trying to create these. I'd rather use Linux for everything, (which I'm doing right now to type this), however the world is still 90% Microsoft.

There are very few choices for jobs, so I have to do whatever there is, and if that's using WPF and related tools, so be it. It is very difficult to learn, however, so I'm just now figuring out how to do animations and all that, and the bizarre separation of WPF and Silverlight is unfortunate and annoying. They should both do the same things with the same syntax if it is something that is otherwise identically rendered.

Anyway, as time goes on, I hope to have some stuff that is less goofing around and more useful. Maybe I'll make my own web page using Silverlight, just for grins.

Other artwork I do on computers:



Wednesday, August 19, 2009

Analog Machines

For many years I used to imagine building an analog computer. Unlike the binary system we are so committed to nowadays, I considered the issues of ranges of values other than 1 and 0. Now, those numbers can just be arbitrary smooth fractions between 1 and 0, or just indefinite numbers of unsigned integers, either of which have however many digits they need. The computer would be made "somehow", even if only of "unobtainium." A value could be a voltage, or a frequency, or a shape of a waveform. Some kind of signal.

In an analog computer the instructions would be made of signals. The signals are entirely analog. There are only as many "pieces" of the signal as are deemed necessary by the receiver of the signal. For instance, an "ear" would be sensitive to the low end of the dynamic range. A radio would be sensitive to a large part of the middle ranges. There are also digital inputs that come from the outside and from the inside. However they are really just analog signals shaped in such a way as to approximate square waves sufficiently to be interpreted as digital signals. These could be as high or low a frequency as needed, up to a certain practical limit (super gamma X-ray frequencies would not be safe for use in any kind of computing device.)

This computer must be capable of storing and retrieving signals. One such storage would be a light beam reflecting inside one of many tubes of mirrors. Electronically it could be voltage amplitudes bouncing between two identical feedback amplifiers. They only increase the output amplitude a fractional amount in order to make up for propagation entropy or attenuation. Time lapse would occur, however, such that the time cost equals one echo per connection. Such propagation effects already exist in current machines. The goal is to minimize time delay, however, so that reality can be "modeled" or interacted with in as realistic a manner as possible.
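A toy digital simulation of that echo storage, just to show the bookkeeping -- the attenuation figure is invented:

    #include <cstdio>

    // Toy simulation of echo storage between two feedback amplifiers; numbers invented.
    int main() {
        double stored = 5.0;             // the signal amplitude "in flight", in volts
        const double loss = 0.98;        // invented one-way attenuation factor
        const double gain = 1.0 / loss;  // amplifier gain chosen to exactly cancel the loss

        for (long echo = 0; echo < 1000000; ++echo) {
            stored *= loss;   // propagation entropy/attenuation along the path
            stored *= gain;   // fractional re-amplification at the far end
        }
        std::printf("after a million echoes: %f volts\n", stored);  // still ~5.0
        return 0;
    }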

However the action is done -- whether by optical means, using light beams and mirrors, or by cascades of transistors, capacitors and inductors -- the sources of electrical signal would not need to be high speed all the time, just very accurately handled, regardless of frequency: whether many very slow signals are used (allowing interim multiprocessing), or very tightly synchronized ultra high speed signals that test the fastest limits of the hardware.

For a computational instance, if I wanted the square root of a number, I would put 25 volts in one hole of "the square rooter circuit" and the other hole would instantly output 5 volts. Only the reaction time of some transistor or other component is important, and they are generally quite fast, almost to the point of taking near 0 time. If a component was bypassed "to save time," there would still be some time used by electrical conductivity in the wire, however near 0 it was. Other mapping functions could be that perhaps X volts input equals Y(f(X)) volts output, or whatever. There are far more possible instances of specific electronic mathematics that can be integrated within the flexible fabric of electronic connectivity -- too many to enumerate, since they cover almost everything in reality and everything that has never happened yet.
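In software terms each such circuit is just a mapping, and the mappings compose. A minimal sketch of that idea, with invented names ("Block", "wire"):

    #include <cmath>
    #include <cstdio>
    #include <functional>

    // Sketch only: an analog "function block" is volts in one hole,
    // volts out the other, effectively instantaneous.
    using Block = std::function<double(double)>;

    // Wire one block's output hole to the next block's input hole: Y(f(X)).
    Block wire(Block f, Block g) {
        return [f, g](double v) { return g(f(v)); };
    }

    int main() {
        Block squareRooter = [](double v) { return std::sqrt(v); };
        Block doubler      = [](double v) { return 2.0 * v; };

        Block circuit = wire(squareRooter, doubler);  // Y = 2 * sqrt(X)
        std::printf("%g volts in -> %g volts out\n", 25.0, circuit(25.0));  // 25 -> 10
        return 0;
    }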

The single input and single output is only a simplification, and there may be many inputs and many outputs for any given "thing which does something" in the machine.

You don't think of radio as being a "batch job" that takes all day to process, like baking pastries. If pastry baking were like radio, pastries would take very little time or energy to bake. It takes little time for the input to a radio transmitter to be sent and received simultaneously by multiple radio receivers at long distances and at a vast number of locations. It would be as if as soon as you shoveled eggs, flour, sugar and milk into the input pipe, fully cooked cupcakes would instantly flutter from the output pipe in somebody else's living room. Computers made in such a way would be far faster than today's linear step-by-step digital computers. (I don't think cupcakes will ever be quite so convenient this way...)

As time goes on, various electronic component designs are made in ever increasing numbers. There will always be far more ideas than actual implementations because of the need for real wires and real components. Talk is cheap but pickles cost nickels. Yet, over time, these accumulated parts have been segmented, cataloged and stored in a vast, human-maintained memory, such as in all the engineering file cabinets and databases of all the companies in the world that make such things.

There are fields within fields of connections possible between devices, and some are like ponds and rivers of undulating liquids with millions of separate sources and sinks for the voltages.

A huge amount of things would not even have names, just as nobody has a name for every grain of sand. A Z-transistor with a K-capacitor and an I-coil might have a name in a database, such as Part Number ZKI-123. A computer which manipulated or used such devices might even create them on the fly, only momentarily hooking up parts X, M and P for a microsecond, never to be used again, because the conditions necessary for that combination never occur again.

Flexible machines of this sort, which could synthesize parts on a need-be basis, could also evolve solutions to problems never encountered before. Much like human immune cells which can recognize things that are NOT correct in the body, a machine could recognize when nothing in its current configuration is suitable for whatever current problem occurs. It would counter the problem with a shaped electrical field and a collection of molecules made ideally for the situation, just as water in a gravity field will fill every nook and cranny of a cup it is poured into. The water does not "think" about the shape, it just automatically assumes that shape. Auto-pilot in airplanes has some similarity to this effect.

An analog computer could be made from mirrors, light emitters and light sensors, hopefully of very high reaction speed. A device made of spheres within spheres, with mirrored surfaces, could emit light signals on one side of a sphere while sensors on the far side pick up the reflections, with no wiring necessary. The "individuality" of signals would not be in discrete wires, but in discrete colors of the spectrum, as many as are needed, all the way down to the Planck wavelength (theoretically). Also, signals could be serial and monochromatic and use only encoded "from-to" addressing to separate one signal from another, similar to how Ethernet works -- certainly not as fast as discrete, private spectral subdivisions could be. The parts which synthesize other physical stuff, such as unique molecules, are more chemical and nano-mechanical in nature, however the brains for such things would be in the analog computer itself.

Powering such devices might occur at the "heat" level. The heat radiating from the planet passes through everything on its way out -- sort of a photon wind. Those photons and the resulting molecular motion within crystals could then produce controlled oscillations which act as tiny "generators" and "motors" in our little machines. Of course there may be limits, and the entropy may be such that it actually damages machines instead of powering them! The implementation details are left to more qualified minds -- I am only thinking out loud. Perhaps if such things were possible, living systems would have already stumbled upon them.

Nevertheless, I can imagine, on some far distant world, after billions of years of biological evolution had run its course, that what we now consider "machines" would evolve in this way. What we consider nano-tech in our vernacular would be the DNA of their evolution. I can't imagine what the end result might be, nor whether it would be good or bad. I suppose there are many bad science fiction stories that could be made about such things. But, regardless of whether God created our own evolution, there is nothing, so far, to indicate that any other kind of evolution would thereby be prohibited.

Thursday, August 13, 2009

Cave Men

There are many things that bother me about the world I live in. In America we still have a large number of bigots, racists, hate groups, militias, gangs and other throngs of violent or verbally abusive people. This is not the only country with such entities, and perhaps not even the worst.

However, in America we like to claim that we are superior, that we have a moral high ground, that we promote "the greatest good" in the world. Whenever I see those words I grimace. I always hope that such things will be true someday, for the sake of my grandchildren.

Today, in the "brave new world" of the 21st century, we still live like tribal savages from the time before the Pyramids. We have barely evolved, if at all, from the cave-men caricatures of our ancestry. In fact, if not for a continuous monitoring of the populace, including a monitoring of the monitors themselves -- our civilization would collapse into a Mad Max world of savages. Racism and bigotry would be the major forces in the world. Intellect and civility would be overrun like Easter bonnets in a cattle stampede.

So, just think about the world you are creating, and what you will leave for your children and grandchildren. If you are just teaching them bigotry, then you are destroying their future.

Tuesday, August 11, 2009

Virtuoso

Recently I have tried using "virtual computing" and I'm very impressed really. Although I have tried to do this before, using a computer that wasn't quite up to the specifications needed to handle it, this time it has been far more successful.

I am using an Alienware computer with 6 GB of memory and an Intel i7 processor with 8 threads, so the horsepower and memory are already there, this time around.

Using a free, for-home-usage product called VmPlayer (a tool in the VmWare product line), and another tool from a web page (known as EZY-VMX) I was able to generate several virtual machines to run on a single computer. I tried a few different arrangements, including using multiprocessors, large memory space to single processor, small memory space, and then running virtual systems using Linux.
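For the curious, each virtual machine is defined by a little .vmx text file, roughly like this (I'm recalling the key names from memory, and the values here are invented):

    config.version = "8"
    virtualHW.version = "7"
    displayName = "Ubuntu 8.04"
    guestOS = "ubuntu"
    memsize = "1024"
    numvcpus = "2"
    scsi0.present = "TRUE"
    scsi0:0.present = "TRUE"
    scsi0:0.fileName = "ubuntu804.vmdk"
    ide1:0.present = "TRUE"
    ide1:0.deviceType = "cdrom-image"
    ide1:0.fileName = "ubuntu-8.04-desktop-i386.iso"
    ethernet0.present = "TRUE"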

So first I made a system for Ubuntu 8.04 (using an existing Ubuntu boot CD I had.) No problems there -- everything worked flawlessly and the virtual machine hardly made a dent in the Alienware's Vista system. What was even more impressive was rebooting Alienware into an existing Ubuntu Linux OS, downloading VmPlayer for Linux and running exactly the same virtual machine from there as ran in Vista. I was a little troubled that I had done so much work for an older version of Linux, however.

Next, I found a pre-built "appliance" for VmPlayer, where someone else had already booted OpenSuse 11 into a virtual machine and stored it as a compressed blob. All I had to do was download the blob, decompress it and run it in VmPlayer. It worked flawlessly as well. And it also ran at the same time as the other virtual machine with Ubuntu. So I had all three running at once, Vista, OpenSuse Linux and Ubuntu Linux. The machine was happily humming along. I decided to get a more up-to-date version of Ubuntu (9.04) by downloading an appropriate blob for that.

So, anyway, this has been a good experience with virtual machines, but I have to admit that I did it all out of curiosity rather than necessity. If I were a business, however, I would probably go ahead and shell out the bucks for a total VmWare package so that I could tweak things and get all the proper updates, etc.

I then updated them with the latest and greatest open source software, especially for software development and word processing, etc. And once I got everything working well, I compressed the disk files that comprise each virtual machine on Vista and backed them up as single blobs of my own. So if I blow something up I can just retrieve the backed-up machine and proceed from there.

Now, I'm not advertising any of this, I don't make a dime off anything I'm saying here, it is just my personal experience with these systems. For all I know there might be just the same abilities using other virtual machine software out there, it just so happened I tried this combination first. I'm also not easily impressed, but this is good stuff.

Tuesday, August 4, 2009

Why I dislike C++ and C#

I admit it. I am an "Object Oriented Programming" hater.

At first there was just a kind of disconcerting feeling about it, such as a sudden shifting from the simplistic terminology of C to the "elitist" terminology of C++. (These are not the only examples of opposite languages, but they are good examples.) I always tried to program computers in the most simple, direct manner so that I could understand what I did later when it needed to be changed. Although C cannot make things assuredly simple, C++ makes the simplest things hard.

For instance, the "Hello World" program can be made almost identical for C and C++, since C++ will allow certain forms of C syntax and function to work right out of the box. This is a good thing, otherwise most C++ programs would never work. Yet it is possible to use the "class object method" model for programming "Hello World" and suddenly a 2 or 3 line C program becomes an entire screen full of symbols and gobbledygook with the string "Hello World" stuck in there several places.
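To make the comparison concrete, here are both flavors in one file; the C version fits in a comment while the "class object method" version sprawls, and this is a tame example, with no namespaces or templates piled on:

    #include <iostream>
    #include <string>
    #include <utility>

    /* The entire C program:
     *
     *   #include <stdio.h>
     *   int main(void) { printf("Hello World\n"); return 0; }
     */

    // The same thing, dressed up in the "class object method" style.
    class Greeting {
    public:
        explicit Greeting(std::string text) : text_(std::move(text)) {}
        void print() const { std::cout << text_ << std::endl; }
    private:
        std::string text_;
    };

    int main() {
        Greeting hello("Hello World");
        hello.print();
        return 0;
    }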

Now, I am not saying that C++ doesn't have good aspects about it. Certainly I like the fact that objects clean up after themselves in a more orderly fashion than C functions, although part of that ability is on the part of the programmer to make sure it is done. I had the same habits when I programmed in C -- to make sure all allocations were freed, all files closed, all errors returned, etc. It was just a habit of programming rather than a structural part of the language.

I originally wrote code in BAL (IBM Basic Assembly Language), Burroughs Assembly Language and PDP-11/70 Assembly Language. There were some other awful things in there too, like Cobol, various Basics and Fortrans. Lisp, Forth and some self-written languages also made my list. But when you program in assembly language, you learn to think in certain patterns that keep you from shooting yourself in the feet. Other languages try to force your feet to keep out of the way of bullets, or disallow bullets entirely.

Macros were an important part of assembly languages. These allowed repetitious aspects of programming to be done once and then reused wherever necessary in new programs. In some ways the C language is merely an enormous macro language encapsulating all the goop of assembly language. Yet the very thing I liked best about assembly was the pinpoint accuracy it gave you. Whatever the machine was capable of, you could make it do it. In today's world, most of a machine's capabilities are wasted, and some small subset is used in 99% of programs.
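For instance, the kind of error-check boilerplate an assembler macro used to stamp out for me, the C preprocessor stamps out almost verbatim. A made-up example, with invented names:

    #include <stdio.h>

    /* Made-up example: encapsulate a repetitious pattern, assembler-macro style.
     * Check a call's return code and bail out with a message. */
    #define CHECK(call)                                  \
        do {                                             \
            if ((call) < 0) {                            \
                fprintf(stderr, "failed: %s\n", #call);  \
                return 1;                                \
            }                                            \
        } while (0)

    static int openWidget(int id) { return id >= 0 ? 0 : -1; }

    int main(void) {
        CHECK(openWidget(3));   /* expands in place, right where it is used */
        CHECK(openWidget(-1));  /* prints: failed: openWidget(-1) */
        return 0;
    }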

Although I do not wish to program in assembly language any more (carpal tunnel hell), I do miss the pinpoint accuracy. Using C makes me feel like I'm using very dull pencils. Using C++ makes me feel like I'm using Legos with Swiss Army Knife attachments made from balsa wood. With C++ I hardly ever achieve exactly what I set out to do with a particular program. It always winds up being what I am allowed to do by some hidden Fascist inside the machine.

C# is another level of icky gooey stuff poured over C++. In some ways it is like a scripting language, or a little bit like Java. I think the benefit of C# is sort of lost -- it is just another arbitrary thing created by Microsoft that could just as well have been done with Java (but without Microsoft's purely profit-driven reasoning...)

I stay away from C# for that reason. It isn't that I want to program with difficult, syntactically punctuated languages at all. I just dislike arbitrary reinventions of wheels. It was a great waste of programmer time and it is a waste of my own time to learn and use it for anything. Especially since there is a performance and capability loss with the use of C# (and its .NET world.) It is like using C++ with thick mittens on, and under the watchful eye of a vicious Nun.

Sunday, August 2, 2009

Thinkless Machines

I read all kinds of stuff about Apple's hardware, software, iPhones, iPods, iWhatevers. I am not against Apple, really, but I don't own any of those things.

I know lots of people with iPods, etc., such as my daughter and most of her friends. Her husband evidently ran over one with his car recently -- I found the flattened, broken thin glass and metal thing in a compartment in his truck when I was looking for a rolling marble or something. Whatever it cost, it is worthless now.

I have many computers, including an obsolete 1999 ThinkPad, a newer medium-quality Dell laptop, a higher end Alienware desktop, plus a lesser Dell desktop about 5 years old. I use all of them for various purposes -- sort of software quality filters. If something will still run decently on the ThinkPad (with Linux) it will run extremely well on the Alienware box (with any OS...)

But, back to the Apples. Why don't I have an Apple? Actually my first "home" computer was an Apple, although I had played with TTL circuitry and made weird little contraptions -- sort of proto-computers -- before I plopped down $2500 for an Apple IIe. Now, that was a SLOW computer.

Later on, due to my profession in software, I had access to very powerful machines -- servers, workstations, robotic systems, etc. -- and didn't really have time to screw around with gutless machines. So I needed whatever the latest greatest fastest stuff was -- usually an Intel box, but sometimes it was SGI or even an IBM system of some kind. Apple was not on the list.

I did play with a NeXT box for a while, something that was for porting software to, but it wasn't a very popular system for whatever reason, cost or lack of color or something. I thought it was OK, and certainly I liked the C and C++ programming for it. But it was just a brief project, and on to the next junk.

I have never used a Mac, especially an iMac or whatever they have, either the desktop or the laptop or the iPod or iPhone or anything else. I guess the closest I've come is having used Safari web browser for a few days, and occasionally using iTunes for playing mp3s (but not syncing to an iPod or using the iStore...)

And now that I'm getting on in years, I probably won't ever purposely buy anything from Apple. If somebody buys me one as a gift or as a work project or something, I guess I wouldn't kick it off the desk. But I'm not rushing out to empty my wallet on one anytime soon.

For one thing, for absolutely FREE, I can use any of a myriad of Linux distributions (I know, I know -- Apple-heads look down on Linux). Yet, for all practical purposes, except for the prices of machines and software, Linux is very similar to Apple's stuff. Not identical, no. Nor is it identical to Microsoft, nor is Linux even identical to Linux, since there are so many flavors.

But I am a machine head -- I like the fact that I can mold Linux any which way I want, or not, on a whim. I don't have Cupertino's lawyers breathing down my neck, nor the ghost of Bill Gates haunting me. I just have nice systems running in all my machines.

I also have Microsoft, of course. I have to because I'm a software guy. But I use that the way I'd use a semi-truck -- to haul cargo. I use Linux the way I'd use space craft -- to do whatever the heck I feel like doing. I'm using Ubuntu Linux on my Alienware to write this, but I could just as well have done it with the ThinkPad, although a bit slower.

Apple? I'm not sure I even know what to think about those things. I hardly ever think about them at all. And I for sure will never pay actual money for one, ever again. I paid way, way too much for that Apple IIe, and it couldn't even think one bit.

Friday, July 31, 2009

Another Life Emerges From The Goop

There is a war going on between humans, and since it involves a subject so profoundly bound with our existence and our place within the Universe, naturally there will be heated disagreement and even deaths resulting. You guessed it -- religion and philosophy cause wars.

First of all, even without a discussion about the nature of life, or even the nature of the non-biological processes all around us, there are many arguments over whose "holy book" is the holiest. It seems to always depend on where you were born, and therefore which culture had its effects imprinted on you first. I don't like arguing about things that are essentially arbitrary. In fact I don't like arguing at all. It is quite tiresome, and I have little of my life remaining to waste on such endless blithering.

I am not an Atheist, although I don't necessarily disagree with many of their arguments. Yet I am definitely not a Theist, because both of those labels imply that I believe something. I don't believe in either premise -- that there is NO god, or that there IS a god. Who knows?

I don't "believe" anything on that level. There are many things that I suspect, and there are many things that seem to be true -- the fact that I am alive right now, typing these words into a computer -- how can I argue against such a fact? I can't. But that is not something that requires "belief". I don't have to actively force my brain to believe in such an immediate reality. I just exist, whether I believe it or not.

"Belief" is something that is required whenever there is a possible doubt. And for scientists, the doubt must be overcome with logical facts which can be proven (or at least not collide against things which are already proven.) I am more like a scientist in that regard. The other kind of belief is "Faith", in which one believes something for the sake of believing it, regardless of any facts, for or against. I have no "faith".

I used to joke about believing things, such as "I will now believe in Aliens." I certainly had no reason or knowledge regarding Aliens that could prompt such a decision. It was just an arbitrary thing to believe -- something which many people have strong feelings about. I just liked to see how people would react to such a statement. It is silly, yet no more silly than believing in the Big Bang or something like that.

However, when I look at the most fundamental aspects of existence, such as the substrate of atomic interactions known as chemistry, with all the electrons and atomic weights somehow producing a vast array of properties in so many different combinations -- I see an endless complexity. It is not necessary to drag "life chemistry" into the problem in order to see the complexity. It exists in the simplest chemical, e.g. hydrogen. The existence of hydrogen is a mystery all its own. The relationship of the single proton with its single electron has a built-in unpredictability, and behaviors which depend greatly upon the temperature of the environment (and thus of other hydrogen or other atoms) around it.

Looking even closer, one finds that a proton is a collection of quarks, with different properties within, thereby making the hydrogen atom far more complicated than a stone with a littler stone orbiting it. Even an electron, so tiny compared to the proton, is a lepton particle/wave contraption with little rules of its own within. It also depends on the temperature of the environment to behave whichever way it does.

When we back out to the level of human life, and pick up a stone from the ground, we don't see the vast interactions of countless specks of atoms and their electrons. We just feel the weight, the hardness, the solidness of the stone in our hand. If we could live a billion years, and held the rock in our hands the entire time, it would have changed very little. Possibly the acids in our skin might affect it, or bacteria, gasses and so forth in the environment might discolor the thing, but it would mostly just be that same old stone.

Yet, internally, the atoms may have all changed places in their matrices, and certainly the electrons would have all been replaced. If one could watch atoms through a microscope, which is rarely possible, there would be an endless dance of molecules, vibrating, bouncing about, changing partners as in a minuet. Yet, seen from the distance of our eyes to our hands, the stone's atoms seem to be as completely solid as solids can possibly be.

Still, if you could make a tiny drill with a drill bit as sharp as a single atom of titanium, and drill the tiniest possible hole into that matrix of atoms, the hole would disappear nearly instantly once the drill bit was removed. Atoms won't tolerate such an artificial structure as a hole without some other factor, such as a minimum size for the hole, or a junction layer of differing atoms that hold back the tide of other atoms from filling in the hole. Yet, even the junction layer would have rules governing the minimum size of a hole that it could tolerate.

So, are these atoms alive? Are they like teeny tiny bacteria or viruses? I doubt that they are alive in the sense that formal life forms are alive. Certainly they are not organic molecules, at least not in a common stone -- perhaps made only from silicon dioxide, quartz, or feldspar, without a single carbon atom to be seen in the whole matrix (except possibly as an impurity.)

But, even if the rock was merely a solid crystal of hydrogen (not very likely on the Earth), it would have extremely unpredictable behavior at the molecular level. The only thing that can be predicted is that it will be chaotic. This is the law of entropy. Things will become increasingly disordered with time, releasing their energy in lesser and lesser levels of infra-red radiation as entropy progresses.

Not until all matter has been stripped of its latent heat, frozen beyond any conception of the word "freeze", will there be a cessation of the progress of entropy. Where will all that heat have gone? It will have been dissipated into infinite space, radiated beyond the reach of every atom. The "heat" will have been stretched out until it is flattened and cold, and then to disappear from existence entirely.

But until that far distant time, which could be a quintillion years from now, or longer, there will be the ceaseless dance of molecules. Ever changing shapes and configurations, shuffled and reshuffled again as if in infinite atomic poker games. Atoms will accidentally form shapes and designs of nearly limitless kinds. Only their "death" at that future heat death at "absolute zero" (a sort of comical expression, really, any other kind of "zero" would not really be "zero" at all...) could ever stop the endless dancing of atoms and particles of matter.

Certainly, as the temperature reduces, the freedom of movement of each atom is reduced. The crystal forms will be less and less able to be altered. The atoms will be forced to dance in smaller and smaller arenas, as in the crystalline methane snow of Pluto. It is unlikely that hydrogen will become anything more complicated than methane in such an environment, yet it is even there, frozen as solid as solid can be, the most minute organic molecule, like a tiny frozen biscuit, food for bacteria that might come along someday.

Of course, with the addition of "dark matter" and other exotic formats of energy and space, there may be other completely separate complexities of which I know nothing, and can only just pretend to imagine, but even so, the meaning of "existence" must include those things as well, even though nothing is known about them. But rather than subtracting from the complexity of simple matter, it can only add more.

Anyway, this is what I tend to "believe", if I must use such a word -- that atoms and energy ARE life. They have always been alive, at the most basic level, and always will be. As ever increasingly complex "life forms" such as ourselves come into existence, certainly far more complex than basic atoms, they are able to look upon these little bricks of super-simple life and wonder -- how did these little things get created? Was there an intelligent maker? Or did these things just emerge from the infinite goop of nothingness?

Thursday, July 23, 2009

Wallpaper Images


I have written some software based on 3D rendering algorithms for generating somewhat surreal scenes of massive chrome balls floating over a multicolored terrain made from boulder-sized cubes and even more massive reflecting pools and blocks. The pixels of background images are used for coloring each block, in various "secret ways" which cause the most aesthetically pleasing effects.
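The general flavor of that coloring is nothing more exotic than scaling terrain coordinates into image coordinates and sampling a pixel; my actual "secret ways" are fancier than this sketch, and all the names here are invented:

    #include <cstdint>
    #include <cstdio>

    // Sketch of one obvious pixel-to-block mapping; not my actual "secret" method.
    struct Color { uint8_t r, g, b; };

    // A background image as a flat, row-major pixel array.
    struct Image {
        int width, height;
        const Color* pixels;
        Color at(int x, int y) const { return pixels[y * width + x]; }
    };

    // Color the block at terrain cell (bx, bz) from the matching image pixel.
    Color blockColor(const Image& img, int bx, int bz, int terrainW, int terrainH) {
        int px = bx * img.width / terrainW;   // scale terrain coords to image coords
        int py = bz * img.height / terrainH;
        return img.at(px, py);
    }

    int main() {
        Color pix[4] = { {255,0,0}, {0,255,0}, {0,0,255}, {255,255,0} };
        Image img = { 2, 2, pix };
        Color c = blockColor(img, 10, 3, 64, 64);  // block (10,3) on a 64x64 terrain
        std::printf("block (10,3) -> rgb(%d,%d,%d)\n", c.r, c.g, c.b);
        return 0;
    }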

I must thank "SuperJer" for some of the original ideas and algorithms; however, I extensively rewrote them to handle very large mapping objects, with the eventual goal of individual pixels instead of the "large blocks" currently rendered. That kind of detail takes a very large amount of memory even with more efficient storage methods, and will also require massive processing, which is currently being done on an Alienware computer using an Intel Core i7 CPU with 8 hardware threads and very fast memory, etc.

Monday, July 20, 2009

Self-chosen Man.

For a million years or so, Mankind has been poking and scraping at things on this planet, sometimes eating them, sometimes just making shapes to amuse themselves, sometimes just destroying things accidentally. We are good at that.

And ever since then, when Mankind ate of the "fruit" of the tree of knowledge, the world has not been so innocent ever again. Before we came along there could never be Pekingese dogs. There could never be Texas longhorns. There could never be Siamese cats. There could never be popcorn. And there could not even be modern humans unless there were proto-humans who made the initial qualitative choices that led to our genetic form.

We made all of those things appear on the Earth, quite unnaturally. There was the time before when natural selection purely laid the foundations of life. If a life form was weak or unsuited for a particular niche, too bad. It was gone. And the same thing applied to early hominids. It is not clear, exactly, what the environment held against us in those days, but certainly large wild animals, very bad weather, and even very savage fellow hominids all had a hand in their doom. Some kinds of human habitats are similarly savage even today, such as in grizzly bear country, or in places where tigers or leopards still roam.

It is somewhat harder to follow the next phase of logic, which was that as hominids became ever so slightly more intelligent, even though barely above the intelligence of a baboon, their mental processes became part of the "natural selection" process as well.

Although I'm certain we did not consciously create ourselves from the raw material of lesser apes, I am sure that we made choices about who we mated with, who we slaughtered, who we befriended and so forth, on increasingly more arbitrary, qualitative criteria. I call this "augmented" natural selection.

Of course the hominids had already inherited many traits from their ape ancestors, just as we can observe in modern versions of apes. Still, there are no "human-like" apes except for humans. Chimpanzees and gorillas may share many traits, but they are clearly not performing the same kind of "augmented" natural selection which is the hallmark of human genealogy. If they had, they would have become sentient beings like ourselves. They did not.

Let's make a cruel but plausible scenario in which early man might have made decisions that affected evolution. For instance, let's assume that, for whatever reason, there was some bad weather, some limited food stuffs, and few food animals to hunt. It is also most likely that males were the dominant ones of our species in most (but not all) aspects of life. They still seem to be the most dominant today, although not in quite so pronounced a manner.

So, if the alpha male has to make choices of who gets to eat and who doesn't, what criteria might there be? The male, being as shallow as modern males, perhaps, might choose more sexually appealing females to keep and drive away the others. (I'm not sure what "appealing" meant for the earlier hominids, maybe odor, maybe looks, who knows?)

The males in the group might either have been held at "spear tip" distance, or at least allowed to remain in the group on a strict value basis, such as whether they were essential for hunting, excelled at tool making, or even if they were merely good friends with the head honcho.

Males that encroached on the alpha male's gang of females might not be so appreciated, but there may be room for sharing some of the females so long as there is a benefit to the group in the opinion of the alpha male. No matter how prodigious an alpha male might be, he still can only service so many females in a given time. So it is unlikely that an alpha male could successfully keep every single female to himself, because even if he's the meanest, most selfish hulk on the hill, he has to sleep sometime and the females could be very sneaky in their own quests.

Still, some kind of understanding about that would affect who mated with the females and who didn't. This is another area for qualitative selection to be enforced.

Certainly, at some point along the way, the sexual habits of the humans began to reflect their intelligence, both in the conscious selection of mates and in the choices which females themselves could make amongst the children they bore. Even the shapes and functions of the sex organs became selected for.

In purely aesthetic areas, if a child was somehow too ugly, or had something "unappealing" about it, the females might neglect it, allowing the unfortunate mutants to wither away. In some animal species the females actively seek out and kill babies for many reasons, including jealousy, anger, mutation and perhaps even "having slightly the wrong odor."

Humans react to babies in predictable ways in modern times; however, even the most mutated human babies might be sheltered by parents now and then. There are many cases of idiot savants, the Elephant Man, paraplegics and so forth, who would probably just die if subjected to a more natural setting. On the other hand, many of the same conditions would spell certain doom for those poor children, from either stark neglect or outright infanticide by frustrated, disappointed parents.

I wonder how many infants were "drawn and quartered" in the courts of ancient kings, as in the stories of Solomon. It is not so much that Solomon actually did such things; what is telling is that the women in the story about "who the real mother was" actually believed Solomon would and could do such a thing.

But infant killings are rare enough and specialized enough that they are usually understandable, and usually out of necessity. One cannot keep a baby alive if it has no brain, or if the skin cannot form around its internal organs, or if its bones grow together in a knot, etc. Whether human or animal, those cases are hopeless. Yet in human society, those cases can lead to greater knowledge about the mechanics of genetics.

It could be a side effect of modern medicine -- keeping alive babies which were deemed hopeless in earlier times -- that we will ruin our genetic heritage. But I think we have already ruined it, many eons ago. We can never be "naturally selected" again. If conditions on this planet become so terrible that only the "most fit" can survive, then the greatest majority of us are certainly doomed. I cannot even imagine what kind of person it would take, because I don't know what the conditions really will be. Many past extinctions befell animals which were far more adapted to harsh conditions than we are.

But, technology being a kind of "ace in the hole", it may be that it will not be the most brutish of the brutes that survive, but the geekiest of the geeks, instead. As for the females? I have no clue. But, if you notice, geekiness knows no sexual boundaries.

Monday, June 29, 2009

My Failure

Although in my years on this planet I have attempted many things of various gradations of difficulty, and have failed at many along the way, there are a few failures which bother me more than others, and one in particular that bothers me the most.

First of all, I failed high school. This was not a problem with intelligence, nor with schoolwork at all. It was merely because of problems at home, in a dysfunctional family fraught with distracting emotional turmoil and typical teenage anxiety on my part. This was mended later, after a long interruption.

Secondly, I failed in the US Navy. Although I always did my job well and was very bright and capable, I did not obey the primary law of the armed forces: no matter who is right -- I am wrong.

Having been removed from the Navy with an "honorable" but general discharge, I then set forth to mend the first problem. I took and passed tests which allowed me to enter college for a degree in engineering. But, sure enough, I eventually failed that as well.

Due to issues with living in Lubbock, Texas, problems with a failed marriage, and problems with a tornado that destroyed my job -- I just left for California without ever getting a degree. These are not so much excuses -- merely facts. I was to blame for most of those problems as well. I cannot be blamed for the tornado, however.

In the years that followed there were many other ups and downs, but there were great successes in my life, finally. I was able to learn about computers to a very great depth, including digital electronics, systems engineering and software engineering. This also occurred during a time of explosive growth in the use of computers for everything from space, military, business, art, music and robotics to medicine and home recipes, which was all very fortunate for my career.

The one category, robotics, was my most favored, and the one in which I buried myself the furthest. Sadly, however, the USA was not so interested in robots in comparison to Japan or other countries. So this was not a sustainable career choice.

And therein lies my most dismal failure. I had intended to develop an "autonomous being", a partly robotic, partly computational "animal" -- and thought that if I worked hard enough and studied all the sciences necessary, I could surely accomplish this feat within my lifetime. It did not necessarily need to be human-like, but certainly most people would identify with such a "being" more than any other.

As time went on, the number of disciplines I found necessary to study began to mount, along with the interruptions from the necessity of earning a living, and there were glimpses of the failure I would someday feel so sad about. For one thing, I could not merely depend on knowing electronics, which is in itself a complexity that can consume one's mind entirely. The details of creating integrated circuitry, with the myriad molecular surface interactions, electron tunneling, metallurgical and chemical effects and so forth, involve entire fields of science unto themselves.

I could not depend on my knowledge of physics, which was also in depth, but certainly only a minute fraction of the amount I would need to know if I was to truly learn the secrets to creating an "autonomous being".

Even the degree to which electronics and physics overlapped, at the junction between the leptons and quarks of quantum mechanics, was a problem so difficult that even Einstein faced failure. And I am certainly no Einstein.

But despite those issues and many others regarding the sheer number of scientific disciplines I would need to master, there was the lack of understanding, generally, of what provides animals and especially humans with their psychological and physiological "computational brain" abilities at all. What gave them their autonomy, their consciousnesses and sensory faculties? It was possible to trace out neuronal pathways, nerve endings and all that, but was there also some "magical" substance that could not be generated mechanically or biochemically?

Great arguments along this line persist and they overlap many philosophies, sciences and religions. What constitutes life and bodies and minds? Is life something that can be "designed" by creatures as limited as ourselves? Or does it require supernatural Gods? Or is it only something that emerges from the muck -- completely unguided, completely by accident? I don't know, although it seems to be the "accidental" one.

Anyway I failed. I am old enough now to know I never will accomplish that lifelong goal. I shall never devise any such thing as an "autonomous being". And what is worse is that I may never even understand what such a thing really is. The complexity is just too great for my inadequate mind. Perhaps it is too great for any mind.

I did create many "self-organized" programs. They perhaps touch upon certain tiny pieces of something that could emerge as an "autonomous being", but certainly they were too simple in themselves to count. Maybe if I wrote a million more such programs and let them fight it out in the cybernetic arena, just by accident, and perhaps only for a few milliseconds -- I might have provided for the existence of "autonomous beings." But that would not be a success. That would merely be an accident.

Epilogue:

I failed, yes. But then all of our existence as true autonomous beings could be merely an accident. We are a kind of failure of the universe. The universe -- just for a little while -- failed! It failed to exhibit the usual, normally expected, increasing disorder. It didn't "do entropy" correctly. Not for the last 3 or 4 billion years, at least.

If the universe failed in this, it means that something, far in the distant past, failed even more so. Because at some point in time, whether at the point of the "Big Bang" or in some other "Little Bangs", the universe was suddenly very orderly (so there is something from which disorder is being made). And by creating "autonomous beings" like ourselves, it made a puzzlingly profound order from the chaos.

But, don't you worry. We shall make up for this lack of entropy by manufacturing an extra amount. We always have.

Thursday, June 4, 2009

Product Ideas


1. TCX CLIENT

The Tcx Client program is only half of a program. You would have to imagine the other half -- the Tcx Server -- because it is invisible and there might be hundreds of them. Tcx Client automatically handles multiple Fuzzy Text Search requests to as many servers as are available -- in parallel. This was written in C++ with wxWidgets, using Code::Blocks and the wxSmith GUI builder. Tcx stands for Text Content Indexing.
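
The parallel fan-out is the interesting part, so here is a minimal sketch of it (the real Tcx networking is not shown; fuzzySearchOneServer is just a stand-in for the request/reply over the wire):

    #include <mutex>
    #include <string>
    #include <thread>
    #include <vector>

    // Stand-in for the real networked fuzzy text search against one server.
    std::string fuzzySearchOneServer(const std::string& server,
                                     const std::string& query)
    {
        return server + ": results for '" + query + "'";
    }

    // Fan one query out to every known server in parallel, collect all replies.
    std::vector<std::string> fuzzySearchAll(const std::vector<std::string>& servers,
                                            const std::string& query)
    {
        std::vector<std::string> results;
        std::mutex lock;                        // guards 'results'
        std::vector<std::thread> pool;
        for (const std::string& s : servers) {
            pool.emplace_back([&, s] {
                std::string r = fuzzySearchOneServer(s, query);
                std::lock_guard<std::mutex> g(lock);
                results.push_back(r);
            });
        }
        for (std::thread& t : pool)
            t.join();                           // wait for the slowest server
        return results;
    }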

2. IMAGE WORK

The Image Work program was built using Qt Creator and private C libraries. (There is also a wxWidgets version.) The program allows experimenting with various image processes and contains the built-in HTML browser that Qt provides. Image Work also stores neurally encoded features for each image, so that the program recognizes images which are similar to the currently displayed image.

I imagine there are dozens of programs built around the same main idea as this one. But the recognition system behind this one does work. The wxWidgets version of this same program is actually more capable in many areas, but the Qt version is more attractive for this picture...
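
The recognition machinery boils down to something like the sketch below (the neural feature encoding itself is not shown; assume each stored image has already been reduced to a fixed-length feature vector):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Euclidean distance between two feature vectors of equal length.
    double featureDistance(const std::vector<double>& a, const std::vector<double>& b)
    {
        double d = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i)
            d += (a[i] - b[i]) * (a[i] - b[i]);
        return std::sqrt(d);
    }

    // Index of the stored image whose features are closest to the query's.
    std::size_t mostSimilar(const std::vector<std::vector<double>>& stored,
                            const std::vector<double>& query)
    {
        std::size_t best = 0;
        double bestDist = featureDistance(stored[0], query);
        for (std::size_t i = 1; i < stored.size(); ++i) {
            double d = featureDistance(stored[i], query);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }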

3. IMAGE DISPLAY

ImageDisp was an earlier, less complicated program which just displayed images, and that's about it. The good part about this and all these programs is that they all work on Linux and Windows (Vista too). They should work on Macs as well, but they haven't been tested on one as of yet; besides, Macs are so perfect they don't need software.

This early program was merely a tutorial for me while learning wxWidgets' imaging abilities. It is handy enough, though, so I use it on all my machines. There is an identical Qt version, but not for commercial purposes, which is Qt's fly in the ointment.
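
The core of that early program fits in a page of wxWidgets. A minimal sketch (the file name is hypothetical) of loading an image and painting it in a frame:

    #include <wx/wx.h>

    class Canvas : public wxPanel {
    public:
        Canvas(wxFrame* parent, const wxString& file) : wxPanel(parent) {
            bmp = wxBitmap(wxImage(file));       // load the image from disk
            Bind(wxEVT_PAINT, &Canvas::OnPaint, this);
        }
    private:
        void OnPaint(wxPaintEvent&) {
            wxPaintDC dc(this);
            if (bmp.IsOk()) dc.DrawBitmap(bmp, 0, 0, false);
        }
        wxBitmap bmp;
    };

    class App : public wxApp {
    public:
        bool OnInit() override {
            wxInitAllImageHandlers();            // register JPEG/PNG/etc. loaders
            wxFrame* frame = new wxFrame(nullptr, wxID_ANY, "ImageDisp sketch");
            new Canvas(frame, "test.jpg");       // hypothetical image file
            frame->Show(true);
            return true;
        }
    };
    wxIMPLEMENT_APP(App);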

4. WAV FILE SOUND ANALYSIS

wxWave simply loads .WAV sound files and allows analyzing their spectral signatures.

It is the precursor to a program that will recognize speech or other sounds in the same way that the above Image Work program recognizes images. Although still images are more complex than speech, they are somewhat easier to manage because they are "still". You can never have "still" speech. It is always moving. But the idea is somewhat similar: find neural features within sounds and store them for later recognition.

Other than the intended recognition part, there are far more interesting forms of this type of program out there, but none of them does what I want to do. This technology will also be part of a more ambitious attempt to index both movies and sounds -- something my assortment of home computers will struggle with. I think an Intel i7 would be necessary, at least.
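
The "spectral signature" part is nothing exotic. A minimal sketch of it is a plain DFT magnitude spectrum like the one below (in practice an FFT would be used for speed):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Magnitude spectrum of a block of WAV samples via a naive DFT.
    std::vector<double> magnitudeSpectrum(const std::vector<double>& samples)
    {
        const double PI = 3.14159265358979323846;
        const std::size_t n = samples.size();
        std::vector<double> mag(n / 2);
        for (std::size_t k = 0; k < n / 2; ++k) {   // one bin per frequency
            double re = 0.0, im = 0.0;
            for (std::size_t t = 0; t < n; ++t) {
                double angle = 2.0 * PI * k * t / n;
                re += samples[t] * std::cos(angle);
                im -= samples[t] * std::sin(angle);
            }
            mag[k] = std::sqrt(re * re + im * im);
        }
        return mag;
    }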

5. IMAGE DISPLAY with OPENCV

This is "ImageDisp with Video Capture" (featuring yours truly.) It also has the ability to process video in real time, so far as the underlying hardware is capable of that. This program is an experiment using wxWidgets + OpenCV (Intel's open sourced Computer Vision algorithms.) My own personal set of C/C++ image processing algorithms are simultaneously usable as well, although the OpenCV versions are often higher performance, if not so easy to use, mainly because of their ability (in a few cases) to use specialized Intel hardware tricks.

This video display is using a function which performs image transformations with a more artistic bent (the Plasticize effect) rather than anything utilitarian, just for the display.

There is also another part to the Gui to select options and parameters for things like that.
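
Stripped of the GUI, the whole thing is a capture-process-display loop. A stand-alone sketch (using OpenCV's own window rather than wxWidgets, with a plain blur standing in for the Plasticize effect):

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::VideoCapture cap(0);                 // open the default camera
        if (!cap.isOpened()) return 1;
        cv::Mat frame, out;
        for (;;) {
            cap >> frame;                        // grab one frame
            if (frame.empty()) break;
            cv::GaussianBlur(frame, out, cv::Size(9, 9), 0);  // per-frame effect
            cv::imshow("video", out);
            if (cv::waitKey(1) == 27) break;     // Esc quits
        }
        return 0;
    }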

Qt versus wxWidgets (update)

I have tried OpenCV with Qt, which I suspected might have Windows DLL problems, but it does work. Both the wxWidgets and Qt systems seem to perform about the same using OpenCV on Intel hardware. I have used the same code on both XP and Linux.

Gui Builders

I only bother with these GUI builders to make sure the programs are completely portable AND high quality. Otherwise I would just choose whichever was the easiest and be done with it.

Qt takes about 10 times as much space on disk (or in the installation blob) as wxWidgets. I'm not sure why, exactly, although in some cases Qt has better (or at least prettier) versions of things, such as the QtWebKit stuff. There may be differences in the compilers that account for the tremendous difference in code size -- for instance embedded debugging information, or translations to other languages, etc. I haven't figured it out, but I intend to.

Another problem with Qt is that it is commercial, and licensing must be bought if Qt is used to make commercial products. I wouldn't mind paying for it, so long as I was being paid for my work, too. That gets harder and harder as time goes on in today's world.

There may be questions like, "Why don't I also have a .NET version of this if I'm going to have 2 or 3 versions of things anyway?" Because the .NET or even MFC environment makes it very difficult to get down to the nitty-gritty and incorporate random C or C++ stuff. I could overcome those difficulties, but 99% of the reason is that they are non-portable: all the considerable work I would expend on them would need to be re-engineered for another machine or operating system.

I will say that I like the VC++ compiler and the DevEnv debugger the best of all, but even those fail in some places where g++ and gdb survive. My nastiest bugs took both worlds of debugging tools to cross laser beams on the problem. Besides, I can still use those tools mixed in with wxWidgets anyway.

One More Thing...

I have recently tried using a few web-based tools (in addition to many that I already used...), and have experimented with MySQL, controlled from a webpage with PHP. It works pretty well, really, although running Apache 2.2, PHP and MySQL all at once on my XP system seems to have jumped memory usage up a good notch -- like several hundred megabytes.

Here is the web page screenshot. (There is no real link... it is only on a private network.)

Tuesday, June 2, 2009

Robot with Rat Brain Neurons

I excerpted this from an Internet article because it was a few months old and I was afraid it might disappear before long. It may still disappear, at least the picture.

I am always struck by the sorrow that man cannot make robots anything like what we envisioned them to be during the last century. We must still depend on the brain designs that nature provided.

Meet Gordon, probably the world's first robot controlled exclusively by living brain tissue. Stitched together from cultured rat neurons, Gordon's primitive grey matter was designed at the University of Reading by scientists who unveiled the neuron-powered machine on Wednesday.

Their groundbreaking experiments explore the vanishing boundary between natural and artificial intelligence, and could shed light on the fundamental building blocks of memory and learning, one of the lead researchers told AFP.

"The purpose is to figure out how memories are actually stored in a biological brain," said Kevin Warwick, a professor at the University of Reading and one of the robot's principle architects.

Observing how the nerve cells cohere into a network as they fire off electrical impulses, he said, may also help scientists combat neurodegenerative diseases that attack the brain such as Alzheimer's and Parkinson's.

"If we can understand some of the basics of what is going on in our little model brain, it could have enormous medical spinoffs," he said.

Looking a bit like the garbage-compacting hero of the blockbuster animation "Wall-E", Gordon has a brain composed of 50,000 to 100,000 active neurons.

Once removed from rat foetuses and disentangled from each other with an enzyme bath, the specialised nerve cells are laid out in a nutrient-rich medium across an eight-by-eight centimetre (roughly three-by-three inch) array of 60 electrodes.

This "multi-electrode array" (MEA) serves as the interface between living tissue and machine, with the brain sending electrical impulses to drive the wheels of the robots, and receiving impulses delivered by sensors reacting to the environment.

Because the brain is living tissue, it must be housed in a special temperature-controlled unit -- it communicates with its "body" via a Bluetooth radio link.

The robot has no additional control from a human or computer.

From the very start, the neurons get busy. "Within about 24 hours, they start sending out feelers to each other and making connections," said Warwick.

"Within a week we get some spontaneous firings and brain-like activity" similar to what happens in a normal rat -- or human -- brain, he added.

But without external stimulation, the brain will wither and die within a couple of months.

"Now we are looking at how best to teach it to behave in certain ways," explained Warwick.

To some extent, Gordon learns by itself. When it hits a wall, for example, it gets an electrical stimulation from the robot's sensors. As it confronts similar situations, it learns by habit.

To help this process along, the researchers also use different chemicals to reinforce or inhibit the neural pathways that light up during particular actions.

Gordon, in fact, has multiple personalities -- several MEA "brains" that the scientists can dock into the robot.

"It's quite funny -- you get differences between the brains," said Warwick. "This one is a bit boisterous and active, while we know another is not going to do what we want it to."

Mainly for ethical reasons, it is unlikely that researchers at Reading or the handful of laboratories around the world exploring the same terrain will be using human neurons any time soon in the same kind of experiments.

But rat brain cells are not a bad stand-in: much of the difference between rodent and human intelligence, speculates Warwick, could be attributed to quantity not quality.

Rat brains are composed of about one million neurons, the specialised cells that relay information across the brain via chemicals called neurotransmitters.

Humans have 100 billion.

"This is a simplified version of what goes on in the human brain where we can look -- and control -- the basic features in the way that we want. In a human brain, you can't really do that," he said.

For colleague Ben Whalley, one of the fundamental questions facing scientists today is how to link the activity of individual neurons with the overwhelmingly complex behaviour of whole organisms.

"The project gives us a unique opportunity to look at something which may exhibit complex behaviours, but still remain closely tied to the activity of individual neurons," he said.

From