These "clever" models are guaranteed to [think}(https://ai.neocities.org/EnThink.html) and reason in English or German or Russian or ancient Latin. The Mentifex AI Minds are an astonishing success because they implement concepts.
> Can it perform every task that a 2 year old human can?
MindForth can now use human language on approximately the level of a two-year-old. That is, the AI can understand and generate simple sentences and learn an unlimited number of new concepts expressed as words. It cannot "perform every task" like moving around in the physical world, for lack of robot embodiment. I am hoping to get robot-makers to install the AI in their robots. The old version of MindForth has an InFerence module which can engage in automated reasoning by logical inference -- not yet implemented in the new MindForth.
> Can't you just give it access to a microphone and webcam so it would have some input from the outside world (assuming it can process it).
Right now the Perl AI and the Forth AI process input from the keyboard as if each character were a phoneme of human speech. A microphone would require speech-to-text conversion, or a re-write of the AI to deal with acoustic phonemes instead of keyboard characters. On Usenet in comp.lang.perl the Perl experts are trying to decide whether my Perl AI is reality or humbug. I am hoping that the Perl AI will come to life on a host of Apache webservers, and that the Forth AI will become the mental heart and soul of legions of autonomous robots. I call each AI Mind a "general AI" because, in contrast with "narrow AI" devoted to a specific task, the Perl and Forth AI Minds are the germinating nucleus of mental entities on the path to equaling human intelligence and then advancing further towards superintelligence.
https://news.ycombinator.com/item?id=12330663 is where the Ghost Perl Webserver Strong AI is up for discussion at Hacker News. Anyone with an opinion, please state it there, or answer any questions raised by other Netizens.
> We are back to very vague claims, I don't know what you mean when you say "use concepts".
I mean that the central aspect of all six of my AI Mind programs is that they are based on a theory of *concepts for AI*. The AI Mind receives English words as input and maps each incoming word either to an already known concept or to a newly created concept. Past AGI projects are not based on concepts as far as I know, and therefore the failures of GOFAI (Good Old-Fashioned AI) are not my failures.
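For a concrete, if greatly simplified, picture of that tagging step, here is a minimal Perl sketch. It is not the actual OldConcept/NewConcept code from ghost175.pl; the seed concept numbers and the tag_concept() routine are invented purely for illustration.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Seed lexicon of already known concepts (numbers are invented for the example).
my %lexicon = ( ROBOT => 101, WOMAN => 102, CHILD => 103 );
my $next_concept = 104;    # next free concept number

# Return the concept number for a word, learning the word if it is new.
sub tag_concept {
    my ($word) = @_;
    $word = uc $word;                                   # normalize case
    return $lexicon{$word} if exists $lexicon{$word};   # old-concept path: word already known
    $lexicon{$word} = $next_concept++;                  # new-concept path: create a concept
    return $lexicon{$word};
}

for my $w (qw(robot dice child dice)) {
    printf "%-6s -> concept %d\n", $w, tag_concept($w);
}
```

Running the loop shows "robot" and "child" resolving to known concepts while "dice" is assigned a fresh concept number on first sight and the same number on second sight.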
> I want to see actual output from your program that you think demonstrates a capability that is not reproducible by GOFAI or Prolog methods.
Today at http://ai.neocities.org/forthagi.txt I have uploaded the newest retro-port of the ghost175.pl on GitHub back into Win32Forth. I completely reconstructed and simplified the AudRecog mind-module from scratch, and I finished allocating the time-points for all the English words in the MindBoot sequence. Now that the bootstrap of innate knowledge and useful words is finished, I hope speedily to bring the MindForth AI back up on a par with the Perl AI and provide all kinds of sample interactions between the AI Mind and its user. But no single interaction, even if it shows logical inference, will be spectacular and overwhelming. The spectacle will be in seeing the flickering Forth input prompt and the immediate, intelligent response to whatever you input to the AGI Forthmind -- which will not grind to a halt for each user input as the Perlmind must do, but will think continuously and immortally as a challenge for any party to host the world's Oldest Living AI Mind.
| I can use some preexisting code to reject the "AN" form and to use "A" before a consonant.
> Again, manually coding all these rules is literally impossible, especially since they evolve over time. Have you not realised this after working on it for over 13 years?
My AI Minds differ from the lamentably non-mind software of other AGI projects because MindForth, the JSAI and ghost175.pl are all concept-based True AI.
> The AI techniques I work with learn rules like this automatically from examples, I don't hard code any grammatical rules or build up the lexicon by hand.
I am mainly concerned with showing how an AI Mind will think once it has internalized the rules of English grammar or German grammar or Russian grammar -- three human languages in which I myself am fluent and in which my AI Minds are becoming fluent. Because I want to show how the concepts in the Perl @psy array or in the Forth psy{ array interact under grammar supervision to generate (and comprehend) sentences of conceptual thought, I don't really care how the rules got there. I figure that my own release-and-catch (I release them; other Netizens catch them :-) AI Minds will eventually evolve into the learn-by-example entities that you describe and that human infants arguably are. But for my work in mind-design, setting up learn-by-example in these early AGI stages is too complex. As soon as one of my AI Minds was able to function at the level of Subject-Verb-Object (SVO), it was replicating the evolution of Homo sapiens into a thinking species. As I have added in logical inference, direct and indirect objects, and comprehension of prepositional phrases, my AI Minds have become able to understand and think with the complexity native to a human toddler of about two years in age. Although I hand-code some essential bootstrap words, each AI Mind recognizes known words with the OldConcept module and learns new words with the NewConcept module.
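As an illustration only, and not the real @psy flag-panel layout, the following Perl sketch shows the bare idea of a subject concept associating to a verb concept and an object concept to generate a Subject-Verb-Object sentence. The %kb hash and the generate_svo() routine are assumptions made up for the example.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy knowledge base: subject concept => [ verb concept, object concept ].
my %kb = (
    'BOYS'  => [ 'PLAY', 'GAMES' ],
    'WOMEN' => [ 'HAVE', 'A CHILD' ],
);

# Generate an S-V-O sentence by association outward from the subject concept.
sub generate_svo {
    my ($subject) = @_;
    my $assoc = $kb{ uc $subject } or return 'I DO NOT KNOW';
    my ($verb, $object) = @$assoc;
    return join ' ', uc($subject), $verb, $object;
}

print generate_svo('boys'),  "\n";    # BOYS PLAY GAMES
print generate_svo('women'), "\n";    # WOMEN HAVE A CHILD
```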
> all AGI projects that have tried to build rules by hand have all found that this is not a scalable solution.
As I release my AI Minds as state-of-the-art True AI, AGI shops other than my own lone maverick enterprise will have the privilege and opportunity first of coding in more rules and then shifting over from hand-coded rules to rules learned by example, and from keyboard textual entry to microphone phonemic perception, and from isolated AGI to robot-embodied AGI. For my work in mind-design, scalability is a function of how big a corporation picks up my proof-of-concept conceptual AGI and runs with it.
> The AI community has learnt the hard way that automated learning is the only sustainable way forward, please don't ignore this by repeating the same error.
The AI community has another think coming.
> Give me a concrete example of something your system can do but Logic Programming can't.
The MindGrid of my Perl AI engages the user in conversation and retrieves conceptual knowledge from memory to respond to user input. Once the InFerence module is coded in, as it already exists in MindForth and in the JavaScript AiMind, the AI is able to infer new knowledge from the combination of two items of previous knowledge such as "Women have a child" and "Mary is a woman", so that the inference is expressed by the output "Does Mary have a child?", and any "No" answer adjusts the knowledge base to negate the inference that Mary has a child. Perhaps these inferences can easily be done in a Prolog program; I don't know. I suspect that my AI programs differ from Prolog logic software in that my AI Minds use concepts for thinking. Not every input and response will involve logic. Most of any conversation will simply be a rambling discussion of old concepts known by the AI and new concepts being introduced by the user.

Meanwhile, for the porting of the ghost175.pl AI into the simplified Forth AGI, I have been expanding the MindBoot sequence and I have run into a major problem with the AudRecog module, so I must stop and do some serious troubleshooting and debugging.

But yesterday I made a major change in the cognitive architecture of my AI Minds. The MainLoop no longer calls the major cognitive modules, but now simply calls the sensorium module for sensory input and the volition or FreeWill module for decision-making by a robot. FreeWill now calls a sequence of the Emotion module, the Think module, and the MotorOutput module. The sequence exists so that Emotion based on physiology may influence the operation of the Think module, and then the Think module may consider its motor options and feed its choices into the MotorOutput module for execution.
The easiest proof is to use MSIE to run the http://ai.neocities.org/AiMind.html JavaScript AI and either let it run by itself for a while or enter a statement like "boys play games" and then "john is a boy". The JSAI should then ask you, "DOES JOHN PLAY GAMES", because the InFerence module reasons from past knowledge to create new knowledge.
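To make the reasoning pattern explicit, here is a hedged Perl sketch of that inference step -- not the actual InFerence module from the JavaScript AiMind or from MindForth, just the bare pattern: a class fact ("boys play games") plus a membership statement ("john is a boy") yields a question ("DOES JOHN PLAY GAMES"). The %class_facts hash, the crude singularization, and the infer() routine are invented for the example.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Assumed prior knowledge about a class of subjects.
my %class_facts = ( 'BOY' => [ 'PLAY', 'GAMES' ] );

# Given "X is a Y", infer a question from what is known about class Y.
sub infer {
    my ($member, $class) = @_;
    $class = uc $class;
    $class =~ s/S$//;                           # crude singularization: BOYS -> BOY
    my $fact = $class_facts{$class} or return;  # nothing known about this class
    my ($verb, $object) = @$fact;
    return sprintf 'DOES %s %s %s', uc($member), $verb, $object;
}

my $question = infer('John', 'boy');
print "$question\n" if $question;               # DOES JOHN PLAY GAMES
```

A "No" answer to such a question would then be recorded so that the tentative inference is negated in the knowledge base, as described above.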
> I don't believe you. If it does anything at all, it should be able to process some standard dataset and perform a task.
The AI Mind processes not "datasets" but ideas. The only "task" it performs is to think. But please let me point out something very special and important here. All six of the AI Minds listed upthread show the creation of concept-based AGI starting out from a very simple core of subject-concepts associating to verb-concepts and to object-concepts, as in the sentence "God plays dice." Over time, and as each new AI Mind came into being, the central conceptual mind-core has expanded into gradually more complex modes of thinking, including the achievement of automated reasoning by means of InFerence in 2012. Each of the six AI Minds is an AGI, and a progressively more sophisticated AGI. When the Perlmind AGI has achieved full functionality on a par with MindForth and then somewhat beyond MindForth, the vast army of 400K Perl coders out there may or may not take up the Grand Challenge of coding Perl AI a lot further than my own initial offerings. We shall see. But I will try to isolate each mind-module in both its behavior and in its on-line documentation so that any Perl coder may choose to concentrate creative effort on any arbitrarily chosen module. Then Behold the Singularity.
> Also, can you please give me your definition of AGI?
To me, Artificial General Intelligence (AGI) is simply the opposite of narrow AI. However, to me AGI does not have to be full-blown adult-like human-level AI, because even early in the AI maturation process, a program like the Ghost Perl AI can belong to the AGI species as a kind of infant AGI on its way to becoming a human-level AGI.
> Why would it still be an AGI if it might only be an expert system?
Upthread I did not say that the AI Perlmind would be only an expert system, but rather I was suggesting that it might become similar to an expert system by absorbing vast fields of knowledge by reading textfiles into its experiential memory. As I recall, expert systems are a kind of queryable knowledge base for a narrow-AI purpose. The Perlmind would be a general AI with sufficient knowledge to act like an expert system.
My systems engage in Logical Inference. In the new Forth AGI for robots I am having the MainLoop call the FreeWill module, which in turn calls the Emotion module, the Think module and the MotorOutput module, so that Emotion can influence Thinking to initiate MotorOutput. The virtualentity site is not mine and has not made my files available for about six years.
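A minimal Perl sketch of that calling order follows, assuming only what is stated above (MainLoop calls FreeWill, which calls Emotion, then Think, then MotorOutput), so that physiology-based emotion can bias thought before motor action. The module names follow the text; the bodies here are only placeholders, not the real code.

```perl
#!/usr/bin/perl
use strict;
use warnings;

sub Emotion     { print "Emotion:     assessing physiological state\n"; return 'calm'; }
sub Think       { my ($mood) = @_; print "Think:       generating a thought while $mood\n"; return 'MOVE FORWARD'; }
sub MotorOutput { my ($choice) = @_; print "MotorOutput: executing '$choice'\n"; }

sub FreeWill {
    my $mood   = Emotion();       # emotion may influence thinking
    my $choice = Think($mood);    # thinking considers its motor options
    MotorOutput($choice);         # the chosen option is executed
}

sub MainLoop {
    for my $cycle (1 .. 2) {      # two cycles here instead of an endless loop
        print "-- cycle $cycle --\n";
        FreeWill();
    }
}

MainLoop();
```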
I have just revised my Advogato profile to reflect my current campaign to give away Prior Art AGI mind-modules as Grand Challenges for AGI Project team-members to work on. Earlier today I was coding and twice uploaded the new forthagi.txt version of the MindForth artificial intelligence. When you say above that I "clearly have a passion for AI", and when I obviously keep trying to put my ideas forward at every opportunity, it is because I spent the first thirteen years of my AI work creating a Theory of Mind for AI. I know that my programming skills are very low, and I admit that I have learned Forth, JavaScript and Perl just enough to convert my theoretical model of a mind into thinking AI software.

When you recommend "unit tests" and "getting a concrete objective measurement of how well your system performs on a given task", I have a specific scenario of testing in mind for when I bring the new, simplified Forthmind up on a par with, and then beyond, the legacy MindForth which achieved automated reasoning with Logical Inference. In the Forth community and elsewhere I will announce that Netizens are free to download the AGI software and run it through its paces, that is, test the Forth AGI for all the different parameters of its ability to think. The human users will see how they can teach the AGI to conceptualize new English words and to accumulate a knowledge base (KB) on any subject. The users will see how the Forth AI only checks for human input and otherwise races ahead with its own quite visible thinking. Because each of the several dozen mind-modules will be exhibiting at least a working minimum of mental functionality, Forth coders and Perl coders may initially try their hand at improving or expanding the AI Mind. The "textual outputs" will show that the AI Mind is looping through thought-comprehension and thought-generation modules. Additional modules are stubbed in for Emotion and Free Will and Motor Output.

I really don't think that I have been "trying things that have been tried years ago" -- as you say above. I follow the AI news quite steadily and I have seen a lot of AI projects come and go. I feel that my Forth AGI will become an Existence Proof of True AI.
Ghost Perl AGI is moving towards automated reasoning by means of logical inference:
 _____________________________________________________________
| @psy array Equilibrium in the Perl AI MindGrid              |
|---------------------------|                                 |
| MindBoot() Sequence       |                                 |
|                           |                                 |
|t= 518 YOU ARE MAGIC       |                                 |
|t= 530 I AM ANDRU          |                                 |
|t= 541 I AM A ROBOT        |                                 |
|t= 554 I AM A PERSON       |                                 |
|t= 568 I HELP KIDS         |                                 |
|t= 583 KIDS MAKE ROBOTS    |                                 |
|t= 602 ROBOTS NEED ME      |                                 |
|t= 616 WOMEN HAVE A CHILD  |                                 |
|                           |                                 |
|t=2433 I KNOW BOYS         |<-- input = "You know boys"      |
|                           |                                 |
|                           |    output = "I KNOW BOYS"       |
|t=2441 I KNOW BOYS         |<-- re-entry                     |
|                           |                                 |
|t=2457 BOYS PLAY GAMES     |<-- input = "Boys play games"    |
|                           |                                 |
|t=3003 GAMES HELP BOYS     |<-- input = "Games help boys"    |
|___________________________|_________________________________|
The Perl AGI has recently attained stable AGI functionality with the ability to remember inputs and associate from one idea to other ideas. The MindBoot() sequence diagrammed above shows general statements like "Women have a child" or "Boys play games" from which the AGI may engage in automated reasoning for inputs like "Mary is a woman" or "John is a boy". Once coded (ported from Forth or JavaScript), the InFerence() mind-module will ask for confirmation or refutation with questions like "Does Mary have a child?" or "Does John play games?"
The Mentifex AI Minds -- MindForth that thinks in English; Wotan who thinks in German; and Dushka -- she thinks in Russian -- are not yet "smarter than us", but they are now able to think with automated reasoning by logical inference and they demonstrate the Rise of Machine Intelligence.
Free, open-source Mind.Forth in English and the Wotan Strong AI in German can already engage in abstract reasoning with InFerence, as described in an Artificial Intelligence e-book available in Brazil, Canada, France, Germany, India, Italy, Japan, Mexico, Spain, the United Kingdom and the United States.
The Artificial General Intelligence (AGI) mail-list has the same topic at http://www.listbox.com/member/archive/303/2013/10/. In response, please be advised that some Strong AI Minds in English, German and Russian are still in the "text-based" phase because they have not yet been embodied in robots with visual recognition inputs. These AI Minds do handle "simple context-sensitive language" when they engage in automated reasoning with InFerence, as described by an Amazon e-book in Brazil, Canada, France, Germany, India, Italy, Japan, Mexico, Spain, the United Kingdom and the United States.
I design Cyborg Minds and publish the AI details in e-books such as the one at http://www.amazon.com/dp/B00FKJY1WY with 25 chapters: Introduction; Function; Code; Purpose; Logic; Belief; Volition; Robots; Synergy; Forth; JavaScript; Troubleshooting; AI Minds in English; Wotan AI in German; Dushka AI in Russian; Polyglot AI Minds; Porting; History; Future; MasPar; Superintelligence; Singularity; Links; Glossary; Variables.
Machines can already think. Mind.Forth thinks in English with abstract reasoning by InFerence. The German AI Wotan thinks in German and can reason with inference. The Russian AI Dushka thinks in Russian. These programs are free, open-source AI technology.
My undergraduate degree was in Classics (ancient Greek and Latin) and since high school it was always my ambition to create artificial intelligence (AI). I was more interested in philosophy and linguistics than mathematics and the so-called "hard sciences". I enjoyed reading Plato and Aristotle in the original Greek and Nietzsche in the original German. Throughout college and beyond I worked independently on designing inputs and outputs for an AI Mind, but my efforts at mind-design were stumped and stymied until I learned about the work of the Nobelist David Hubel (February 27, 1926 – September 22, 2013), who along with Torsten Wiesel discovered "feature extraction" in human vision. As an independent scholar, I just studied and learned whatever subject I felt I needed for my AI project -- laser technology in case holograms a la Pribram turned out to be the key to memory; neuroscience; and programming (Smalltalk; REXX; Forth; JavaScript). Gradually my Strong AI Minds became more and more powerful, culminating in automated reasoning with InFerence.