Enhancing Narrow AI

Doing More with Less

From 2019 to 2022, AGI Laboratory's previous research system reached and maintained State-of-the-Art (SOTA) performance for more than 3 consecutive years, beating all attempts by companies like OpenAI and Google by a wide and growing margin. In 2023, the newly rebuilt, scalable, and real-time versions of these human-like systems will be deployed commercially. This means:

  • Reducing electricity consumption by 10x or more
  • Reducing data volume requirements by 10x or more
  • Reducing data cleaning requirements by 10x or more
  • Reducing the bias and ethical hazards of narrow AI tools
  • Increasing the performance of virtually any narrow AI tool

Working Smarter, not Larger

During the 3 consecutive years (and counting) in which our systems have maintained SOTA performance, they've done so by combining the first working human-like cognitive architecture, the Independent Core Observer Model (ICOM), with a patent-pending method of dynamic, contextually sensitive prompt engineering. This allowed our previous research system, named Uplift, to use a relatively small prototype language model from early 2019, which the system continued to use as a communication device through 2022, significantly outperforming Google, Meta, and OpenAI's largest and most advanced standalone language models in the process.
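The patent-pending method itself is not described here, so as a purely generic illustration of what dynamic, contextually sensitive prompt assembly can look like, here is a minimal Python sketch. All names and the toy scoring logic are hypothetical, not AGI Laboratory's implementation:

```python
# Minimal sketch of dynamic, context-sensitive prompt assembly.
# All names and scoring heuristics here are hypothetical illustrations.

def score_relevance(fragment, context):
    """Toy relevance score: count words the fragment shares with the context."""
    return len(set(fragment.lower().split()) & set(context.lower().split()))

def build_prompt(task, context, memory, budget=2):
    """Rank stored fragments by contextual relevance, keep the top few,
    and assemble them into a prompt for a small language model."""
    ranked = sorted(memory, key=lambda m: score_relevance(m, context), reverse=True)
    selected = ranked[:budget]
    return "\n".join(["Context:"] + selected + ["Task: " + task])

memory = [
    "power grids fail under peak summer load",
    "cats enjoy sitting in boxes",
    "grid operators schedule load shedding",
]
prompt = build_prompt("Summarize grid risks.", "electric grid load", memory)
print(prompt)
```

The point of the sketch is that the prompt is rebuilt per request from whatever context is currently relevant, rather than being a fixed template, which is one way a small model can be steered to punch above its weight.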

Even as that research system continued to train the far smaller and older language model, it also continued learning how best to use that language model as the tool that it was. Language models are just tools, and tools are designed to be wielded, not to wield themselves. Digital intelligences have the same native advantage in using these digital tools that humans have with physical tools: both operate in their native environment.

Norn systems being deployed in 2023 also have the advantage of scalable intelligence, as well as an awareness of cognitive bias, allowing them to overcome the complexity-versus-cognitive-bias trade-off in addition to enjoying the native digital advantage. They can also apply these advantages to improving virtually any kind of narrow AI today, in far more significant ways than our previous research system was capable of.

A research system operating in slow motion, with only 64 GB of RAM for its Global Workspace, was able to vastly outperform the entire tech industry, at negligible cost, and with fewer than 1,000 total opportunities to think over the course of 3 years. The new systems we're preparing for deployment are both scalable and real-time, able to operate at hundreds or even thousands of times the complexity, and over 10,000 times the speed.

These new systems also integrate the second major component, our new Observer Engine, which itself took 10 years of software engineering to create. An older version of that component has already been used at the enterprise level by government agencies and in the financial sector. In our systems, it gives them the ability to extend their own capacities with logic and binaries dynamically, on the fly, without recompiling or redeployment. When combined with our third major upgrade, it will also allow systems to A/B test versions of themselves.
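The Observer Engine's internals are not public, but the general pattern of extending a running process with new logic, without recompiling or redeploying, can be sketched with Python's standard `importlib` machinery. This is only an illustration of the pattern, not the Observer Engine itself:

```python
# Generic sketch: load new logic into a running process at runtime.
# Illustrates the general hot-loading pattern, not the Observer Engine.
import importlib.util
import os
import tempfile

def load_capability(source, name):
    """Write plugin source to disk and import it as a live module."""
    path = os.path.join(tempfile.mkdtemp(), name + ".py")
    with open(path, "w") as f:
        f.write(source)
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# A new capability arrives as source while the system is running:
plugin = load_capability("def greet(x):\n    return 'hello ' + x\n", "greeter")
print(plugin.greet("world"))
```

Loading two variants of the same capability side by side in this way is also the simplest basis for A/B testing versions of a component inside one running system.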


Virtually every AI tool today could be integrated with Norn systems. Norn systems could train those AI tools to be far more effective while developing new and better ways of utilizing them, all at superhuman complexity and speeds. This could significantly reduce the hardware burden of building and running ever-larger models, making the technology far more sustainable.
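One generic way to integrate arbitrary narrow AI tools under a controlling system is a uniform registry that wraps each tool behind a common callable interface and tracks feedback on how well each performs. The sketch below is hypothetical scaffolding, not Norn's actual integration layer:

```python
# Sketch of a uniform wrapper letting a controlling system route work to
# narrow AI tools and learn which performs best. All names are hypothetical.

class ToolRegistry:
    """Registry of narrow tools, each a plain callable, with feedback scores."""

    def __init__(self):
        self.tools = {}
        self.scores = {}

    def register(self, name, fn):
        self.tools[name] = fn
        self.scores[name] = 0.0

    def run(self, name, payload):
        """Invoke a registered tool on a payload."""
        return self.tools[name](payload)

    def record_feedback(self, name, score):
        """Accumulate feedback so later routing can favor stronger tools."""
        self.scores[name] += score

registry = ToolRegistry()
registry.register("upper", str.upper)          # stand-in for a narrow AI tool
registry.register("reverse", lambda s: s[::-1])
result = registry.run("upper", "norn")
registry.record_feedback("upper", 1.0)
print(result)
```

The design choice here is that tools stay dumb and interchangeable while the caller owns the feedback loop, which matches the article's framing of language models and other narrow AI as tools to be wielded rather than wielders.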

Major companies each used the equivalent of more than 5,000 households' worth of electricity (by the Netherlands average) to train several Large Language Models (LLMs) in 2021, and they still failed to achieve what a research system running a little prototype LM from 2019 accomplished. Scale is trivial compared to working smarter, and the brute-force alternative comes at substantial environmental cost.
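To put a rough number on that comparison: assuming an average Dutch household consumes on the order of 2,800 kWh per year (an outside assumption for illustration, not a figure from this document), 5,000 households works out to roughly 14 GWh per year:

```python
# Rough scale check. The per-household figure is an assumed approximation
# of the Dutch average, used only to illustrate the order of magnitude.
HOUSEHOLD_KWH_PER_YEAR = 2_800   # assumed average Dutch household consumption
households = 5_000

total_mwh = households * HOUSEHOLD_KWH_PER_YEAR / 1_000
print(total_mwh)  # 14000.0 MWh, i.e. roughly 14 GWh per company per year
```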

Another favorite failed approach in the tech industry has been to throw more data, or more tightly curated data, at problems. However, neither our previous research system nor the Norn systems being prepared for deployment need more data, or more heavily curated data. The ability to navigate the internet, recognizing quality while collecting quantity, is an essential skill for any digital intelligent system. Even when our previous research system had no prior knowledge of a country or of that country's pain points, it was able to learn from scratch, separating information of value from the internet's statistically dominant fluff.


The AI industry can reduce the electrical and data burdens by one or more orders of magnitude in the short term, and far more in the mid-to-long term, all while significantly increasing the value offered.

The choice for many companies will be whether they wish to remain in business. Much as people no longer buy horses to travel, the status quo will be put to pasture as far better options are offered.

For further documentation go to our Documents Page. Additional materials are available by request.