Ken's Korner Newsletter, February and March 2024

This edition of Ken's Korner Newsletter covers February and March 2024.

Artificial Intelligence
The field traces its beginnings to the summer of 1956 and a conference at Dartmouth College, funded by the Rockefeller Foundation and attended by, among others, John McCarthy and Marvin Minsky. The term AI is generally attributed to that gathering and its stated aim: “the construction of computer programs that engage in tasks that are currently more satisfactorily performed by human beings because they require high-level mental processes such as: perceptual learning, memory organization and critical reasoning”. It was more of a workshop than a conference, and McCarthy and Minsky were among the handful of people who were active during the entire time. Most of the work at that time relied on formal logic.


In that zinc-plated, vacuum-tube environment the beginnings of AI took root, and it has remained a fascinating and promising technology ever since. In 1957 the economist and sociologist Herbert Simon predicted that a computer would beat a human at the game of chess within ten years. The road to modern-day AI had its share of ups and downs, but Simon's prediction did eventually come true, roughly forty years later, when IBM's Deep Blue defeated world champion Garry Kasparov in 1997.

In the last few years Artificial Intelligence has begun to impact our lives, even if we are not immediately aware of it. But what is Artificial Intelligence? The short answer is “the science of making machines that think like humans.” That is a bit nebulous, since we don't fully understand how humans think. A complete explanation would span volumes, so the purpose of this edition of the newsletter is to give you a general understanding of the nature of AI.

The terms Artificial Intelligence, Machine Learning and Deep Learning are commonly used by various media outlets and other organizations. While there is some overlap, they are not synonymous; rather, they describe different parts of the whole. If you consider AI to be the trunk of a tree, then Machine Learning would be a large branch of that tree, and Deep Learning would be another branch growing from the Machine Learning branch.

Artificial Intelligence is a branch of computer science that aims to simulate human intelligence in software and machines. AI can process enormous volumes of data in seconds; it would take a human scientist years to do the same.

Machine Learning uses algorithms to find patterns in data, much the same way that humans learn. Netflix, for example, uses machine learning to analyze viewing choices and make recommendations to its subscribers.

Deep Learning is a more complex subset of machine learning that can perform complex tasks without human intervention. One example of deep learning in use is finding disease in MRI scans.

These three branches are not mutually exclusive. Self-driving cars use artificial intelligence, machine learning and deep learning in combination to accomplish the task of driving the vehicle.
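
To make the "finding patterns" idea concrete, here is a minimal sketch in plain Python. The viewers, movies and ratings are invented for illustration; this is nothing like Netflix's actual system, just the same basic idea of recommending based on similar tastes.

    import numpy as np

    # Toy ratings: four viewers, five movies (all made up for this example).
    ratings = np.array([
        [5, 4, 1, 1, 2],   # viewer 0 likes action
        [4, 5, 2, 1, 1],   # viewer 1 likes action
        [1, 1, 5, 4, 2],   # viewer 2 likes comedy
        [1, 2, 4, 5, 3],   # viewer 3 likes comedy
    ], dtype=float)

    def cosine(a, b):
        # Similarity of two viewers' tastes, from -1 (opposite) to 1 (identical).
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    target = 0                                    # recommend for viewer 0
    sims = [cosine(ratings[target], r) for r in ratings]
    most_similar = int(np.argsort(sims)[-2])      # index [-1] is viewer 0 itself
    print("Most similar viewer:", most_similar)   # the pattern found: viewer 1

Whatever that similar viewer has rated highly, and viewer 0 has not yet watched, becomes the recommendation.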

Types of AI
There are two main categories of AI.

  • Weak AI
  • Strong AI

Generally, weak AI can only perform one or a few specific tasks, such as playing chess or translating text. Strong AI could perform any task that a human can perform. Currently weak AI is the only type we have; Strong AI is only theoretical, but that could (and probably will) change in the future.

The creation of Artificial General Intelligence (AGI) is underway. It is a monumental task, and the major tech companies are racing to be the first to develop AGI. Super AI is when artificial intelligence exceeds human capabilities. That is still way off in dreamland, so I have left it off the list, for now.

Within these two categories, and currently only the first, there are several “macro” areas such as:

  • Text AI
  • Visual AI
  • Interactive AI
  • Analytic AI
  • Functional AI

There is some disagreement on just how many such categories exist. Some say seven, some say nine, and that number keeps growing every day. For the purpose of this newsletter, I am going to stick with these five.

Text AI focuses on things like language translation, text recognition and speech to text conversion. It is a valuable tool for businesses.

Visual AI does much the same thing as Text AI but with pictures instead of text. It is often used in security systems, equipment monitoring and maintenance, and damage recognition for cost estimates in automotive body repair.

Interactive AI is like a chatbot. It can carry on a conversation and answer predefined questions. When aided by machine learning and deep learning it can understand the context of a conversation. In business this can mean improved communication because it is available 24/365.

Analytic AI is built on and heavily influenced by machine learning and deep learning techniques. Its focus is scanning very large datasets to recognize recurring patterns and relationships. Big businesses use this information in their decision making.

Functional AI is much like Analytic AI. It searches through massive amounts of data looking for patterns. The main difference is that Functional AI can act on what it finds: if it were monitoring a factory floor and detected a malfunction, it would not only send a warning, it could also trigger a shutdown.

Types of AI Models
There are a number of different AI models, and more are being created every day. Here is a list of some of the most popular examples.

  • Linear Regression
  • Logistic Regression
  • Decision Trees
  • Random Forest
  • K-Nearest Neighbors
  • Deep Neural Networks

You can see more at: https://viso.ai/deep-learning/ml-ai-models/
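
As a quick taste of the first model on that list, here is a minimal linear regression sketch using scikit-learn. The advertising-spend numbers are invented for illustration only.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Made-up data: advertising spend (in $1,000s) versus units sold.
    spend = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
    units = np.array([12.0, 19.0, 31.0, 42.0, 48.0])

    model = LinearRegression().fit(spend, units)   # fit a straight line to the data
    print(model.coef_[0], model.intercept_)        # slope and intercept of that line
    print(model.predict([[6.0]]))                  # estimate for a spend it never saw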

The AI models that receive the most attention these days are the generative models. Generative AI can generate content like text or images.

There are four basic types of Generative AI models.

  • Foundation Models
  • Multimodal Models
  • Large Language Models
  • Diffusion Models

Training the AI model

Before the AI model can produce any meaningful results it has to be trained. Training establishes the model's parameters: the internal variables the model uses to make predictions, reach decisions or take actions.


There are three main techniques for training an AI model. This primarily applies to machine learning models.

  • Supervised learning
  • Unsupervised learning
  • Reinforcement learning

With supervised learning the model is trained on labeled data, where the desired outputs are already known. This approach is usually employed for tasks like predicting a value or assigning objects to certain categories, such as filtering out spam emails.
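
As a small illustration of supervised learning, here is a sketch of a toy spam filter built with scikit-learn. The four example emails and their labels are invented; a real filter would train on many thousands of labeled messages.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Labeled data: the desired output ("spam" or "ham") is already known.
    emails = [
        "win a free prize now", "cheap meds limited offer",
        "meeting moved to 3 pm", "lunch with the team tomorrow",
    ]
    labels = ["spam", "spam", "ham", "ham"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(emails, labels)            # supervised training on labeled examples

    print(model.predict(["free offer, claim your prize"]))   # likely ['spam']
    print(model.predict(["can we move the meeting"]))        # likely ['ham']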

Unsupervised learning uses unlabeled data, training the model to uncover hidden patterns or structures within the data by clustering similar data points together. It is often used to develop models that can detect unusual patterns that may indicate fraudulent activity or system failures.
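
Here is a minimal unsupervised sketch along those lines, using scikit-learn's IsolationForest on made-up transaction amounts. Nothing in the data is labeled; the model simply flags the points that don't fit the overall pattern.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Unlabeled data: 200 typical purchase amounts plus two unusual ones (all invented).
    rng = np.random.default_rng(1)
    normal = rng.normal(loc=50, scale=5, size=(200, 1))
    odd = np.array([[500.0], [720.0]])
    amounts = np.vstack([normal, odd])

    detector = IsolationForest(contamination=0.01, random_state=0).fit(amounts)
    flags = detector.predict(amounts)      # -1 marks points the model finds unusual
    print(amounts[flags == -1].ravel())    # the two large amounts should be flagged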

Reinforcement learning involves an intelligent agent being trained to make decisions based on feedback, in the form of rewards or penalties for its actions. Over time that feedback improves the model's decision making.
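
A tiny reinforcement-learning sketch shows that reward-and-penalty loop in code. The "world" here is just a five-cell corridor invented for the example: the agent earns a reward of 1 for reaching the right-hand end, and the feedback gradually shapes its decisions.

    import numpy as np

    # Q-learning on a five-cell corridor (an invented toy problem).
    n_states, n_actions = 5, 2                  # actions: 0 = left, 1 = right
    q_table = np.zeros((n_states, n_actions))   # the agent's learned values
    alpha, gamma, epsilon = 0.1, 0.9, 0.2       # learning rate, discount, exploration
    rng = np.random.default_rng(0)

    for episode in range(500):
        state = 0
        while state != n_states - 1:            # an episode ends at the goal cell
            if rng.random() < epsilon:          # sometimes explore at random...
                action = int(rng.integers(n_actions))
            else:                               # ...otherwise use what has been learned
                action = int(np.argmax(q_table[state]))
            next_state = max(state - 1, 0) if action == 0 else min(state + 1, n_states - 1)
            reward = 1.0 if next_state == n_states - 1 else 0.0
            # The reward (or lack of it) nudges the value of this state/action pair.
            q_table[state, action] += alpha * (
                reward + gamma * np.max(q_table[next_state]) - q_table[state, action]
            )
            state = next_state

    print(np.argmax(q_table, axis=1))   # cells 0-3 should learn action 1: move right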

Five steps to train the AI model. While different situations will require different training strategies, these basic steps will be a part of most AI model training (a small code sketch of the whole loop follows the list).

  • Prepare the data set. Data is the lifeblood of AI. It is important to get good, relevant data.
  • Model selection. Choose a model with appropriate architecture and algorithms.
  • Initial training. Input the data and identify any errors. Avoid “overfitting” where the model becomes biased and restricted to the training data.
  • Training validation. Input new data from a different data set and evaluate the results.
  • Testing. Input “real world” data that the model hasn’t seen and evaluate the results.
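
Here is what those five steps can look like in miniature. The data is synthetic and the decision tree is simply my choice of model for the sketch; real projects substitute their own data set and architecture.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Step 1: prepare a data set (synthetic here, invented for illustration).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

    model = DecisionTreeClassifier(max_depth=3)   # Step 2: model selection
    model.fit(X_train, y_train)                   # Step 3: initial training

    print("training accuracy:  ", model.score(X_train, y_train))
    print("validation accuracy:", model.score(X_val, y_val))    # Step 4: validation data
    print("test accuracy:      ", model.score(X_test, y_test))  # Step 5: unseen data
    # A big gap between training and validation accuracy is the classic sign of overfitting.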

Training the AI model is a continuing job. Error analysis, comparing the results against actual values, benchmarking and documenting the results are all very important. Over time the model will require less and less training, but the oversight never really stops. It is during training that the parameters the model needs to operate are developed and refined.

In the early days of AI people would enter the parameters manually and change them as necessary. That is not a practical solution now; GPT-3 alone has some 175 billion parameters, and GPT-4 is believed to have far more. With modern AI models the parameters are generated by the model as part of the training. The model is fed a collection of sample data from a prepared data set. It generates its own parameters as it makes decisions, and then the results are tested. Basically, the model is training itself, modifying the parameters as needed to get the right result. When the AI model trains itself, things can go a lot faster, so fast that people just can't keep up with what the model is learning. But what is the model learning?
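
A stripped-down illustration of that self-adjustment (not how ChatGPT actually trains): the "model" below is just y = w*x + b, and gradient descent repeatedly nudges the two parameters until the predictions match the made-up data, with nobody typing parameter values in by hand.

    import numpy as np

    # Invented training data that secretly follows y = 3x + 2.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = 3.0 * x + 2.0

    w, b = 0.0, 0.0       # the model's parameters start out knowing nothing
    lr = 0.01             # learning rate
    for step in range(5000):
        error = (w * x + b) - y
        w -= lr * (2 * error * x).mean()   # gradient of the mean squared error w.r.t. w
        b -= lr * (2 * error).mean()       # gradient w.r.t. b

    print(round(w, 2), round(b, 2))        # should end up near 3.0 and 2.0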

A story is told about how the US Army wanted to use AI to automatically detect camouflaged tanks hidden in the trees. They took some fifty pictures of trees with tanks hidden among them and then another group of pictures without the tanks. They prepared the dataset, ran the pictures through the model and judged the results. After a while the model was able to correctly identify the pictures with tanks and the pictures without tanks.

They fed in some new pictures from the original photoshoot that the model hadn’t seen yet. It was able to correctly identify them as well. It seemed the plan was working just as the army had hoped. Then they went out and took a new batch of pictures. When they fed the new batch of pictures into the model it failed miserably. The results were no better than chance. What happened?

It turns out that the two original groups of pictures were taken at different times of day. The group without tanks was shot in the morning. Then they moved the tanks in and took more pictures, but enough time had passed that the lighting had changed and the shadows had moved. The AI model had been learning the difference between morning and afternoon, not tanks versus no tanks.

It is likely that this story is apocryphal. There are several versions of it on the Internet, so it is hard to tell which, if any, of them is true. Also, the US Army is not in the habit of releasing such information. But it does illustrate a very valid point.

There are other examples that are not apocryphal. Google's translation system (Google Neural Machine Translation, or GNMT) developed a language of its own to help it translate between unfamiliar language pairs. GNMT did this without being instructed to do so. While this may be frightening, it shows that there are rules of language that we do not yet understand. However, these new rules are somewhere within the parameters that the GNMT system has developed and is using.

The ancient Chinese game of Go has been considered a grand challenge for AI. The game is more than a googol times more complex than chess: there are about 10^170 possible board positions, which is more than the number of atoms in the known universe.

The programmers gave the AI, AlphaGo, a basic understanding of the rules of the game and then let it play against itself to "learn" on its own. After just a few hours AlphaGo was defeating masters of the game using moves that no one had seen before. More recently, a Go-playing AI was beaten by an amateur who used a diversionary strategy to distract it. That trick worked once, but do you think it will work twice?

So, there are moves and strategies to the game that we do not yet understand, but they exist somewhere within the parameters of AlphaGo.

The bottom line is that it is important to watch what and how the AI is learning. That is assuming that you can keep up with it.


The hardware for AI
It is possible to build an AI neural network in Python and run it on your PC. While that can be valuable and perform a number of important tasks, it would not compare with Amazon's Alexa or Rufus, Microsoft's Copilot or any of the choices on Azure, nor even with Google's lame Gemini. But it is possible.
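
For the curious, here is roughly what such a hobby-scale network can look like: a two-layer neural net in plain Python and NumPy that teaches itself the XOR function. It is a sketch for illustration only and, as noted, nowhere near the systems mentioned above.

    import numpy as np

    # A tiny two-layer neural network learning XOR (an illustrative toy, not a product).
    rng = np.random.default_rng(42)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))   # hidden layer: 4 neurons
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))   # output layer: 1 neuron
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for step in range(10000):
        h = sigmoid(X @ W1 + b1)                # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)     # backward pass (squared-error gradient)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0, keepdims=True)

    print(out.round(3))                         # should be close to [[0], [1], [1], [0]]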

High-performance AI is built on much more powerful platforms, and Nvidia currently makes the most sought-after hardware for the job. Its DGX H100 system is built around the H100 GPU. At 32 petaflops, the DGX H100 is up to nine times faster at AI training and up to thirty times faster at AI inference on large language models than its A100-based predecessor. It also has a rather enormous appetite for power: each DGX H100 node draws up to 10.2 kW. Here is a link to an introduction of the DGX H100: https://docs.nvidia.com/dgx/dgxh100-user-guide/introduction-to-dgxh100.html.

Microsoft, Google and Amazon are buying as many of these as they can. If you can find a new one for sale, it will likely set you back around $400K. Nvidia's DGX SuperPOD, with its NVLink Switch System, can connect up to 32 DGX H100 nodes together. And that is just the start: Nvidia just announced the upcoming release of its newest GB200 Blackwell AI chip.

There are other companies producing hardware for AI besides Nvidia. AMD offers the MI300, which competes with (some say exceeds) the H100. There is also pressure coming from Intel, Cerebras, Tenstorrent, Groq and D-Matrix.

Sam Altman, the CEO of OpenAI who became famous for releasing ChatGPT, has been traveling the globe and visiting some of the big money hotspots. He is reportedly looking to raise as much as seven trillion dollars to expand the world's capacity to produce AI chips and the computing power behind machines like the DGX H100. The goal is to build an Artificial General Intelligence (AGI) model. This would be the "Theory of Mind" type of AI, a form of Strong AI that I left off the list because we don't have it yet.

The artificial intelligence that we have today is just Narrow AI. It can perform one or a few tasks very well. For example, the computer can beat a human player at chess or generate text. But those same computers could not plan their day or cook a meal. AGI would be able to perform any human task as well or better than a human.

OpenAI's chief scientist, Ilya Sutskever, recently presented some insights at TED. Here is a link to that video: https://www.youtube.com/watch?v=SEkGLj0bwAU.

AGI would contain all human knowledge. Additionally, it could teach itself and potentially become even smarter. This brings us to the singularity, the point where computers become smarter than people. Futurists like Ray Kurzweil have predicted this for years now, but nobody really knows when it will happen or what it will be like. In fact, it could happen and most people wouldn't even know it had occurred.

The transhumanists have been waiting for the singularity. To some it is almost like a religion. They call it Artificial God-like Intelligence and plan to "merge" with the machine. It is unclear what that means. Are they going to be implanting electrodes into their brains or just having lengthy chat sessions with bots?

Elon Musk has stated that the ultimate goal of his neuro implant company Neuralink is to achieve symbiosis with artificial intelligence. This way humans could merge with AI so that they aren’t “left behind” by the more intelligent machines.

Up until recently AI was just some disembodied entity dwelling somewhere deep inside the computers. Figure aims to change that by bringing a general-purpose humanoid to life. Figure has an AI humanoid robot that can self-correct mistakes. You can see the video at https://www.figure.ai/. Now AI has a physical body and can walk around among humans, animals and plants. This will be a new experience for AI, as it will have a whole world to learn from and experience, much the way we do.

Before you get to thinking that this AGI is all that great, remember that it can be wrong. Sometimes AI is so wrong that it is laughable. When Google's Gemini was asked for a picture of the Pope, it came up with an Indian woman. It also came up with a picture of Abraham Lincoln as a black man. These are not minor glitches or some error in coding. These are deliberate attempts to promote a specific agenda. People who unquestioningly believe these falsehoods, for whatever reason, become tools of the people controlling the AI. A better approach to AI would be to use the tool, not to be the tool.

When I was a young kid growing up, we were taught to "only believe half of what you see and none of what you hear". With the "deep fake" capabilities of AI, it may be time to revise those numbers downward.

Love it, hate it, fear it, or feel some combination of all three: artificial intelligence is growing faster every day. Whether you are in business, transportation, politics, finance, health care, law enforcement, the military, entertainment or just about any other industry or job, those with help from AI will crush the competition that does not have it.

Some people look to the government to limit the potential downside of AI. Others want to just ban it altogether. That will be fruitless. AI is here, and the genie is not going back into the bottle. Your only real defense is to make sure that you have the best AI out there. That is the reason for this "arms race" in technology and why so much money is being spent on Artificial Intelligence.

For more information about Artificial Intelligence and the work behind the scenes to develop it you can check out some of these links.

For more information about singularity you can visit SingularityNET, (https://singularitynet.io/).

https://shield.ai/ Building The World’s Best AI Pilot

https://www.coe.int/ai Towards an application of AI based on human rights, the rule of law and democracy.

https://www.anthropic.com/ AI research and products that put safety at the frontier.
  Claude 3 https://www.anthropic.com/claude. Claude is a family of foundational AI models

A paralyzed man with a Neuralink implant can control a computer and play chess via his thoughts: https://twitter.com/CitizenFreePres/status/1770580672194981993?t=v97zM_qEi7LCYhNdnLn4Fg&s=03

Joe Allen is a little crazy but he makes some good observations at https://substack.com/@joebot

And if that hasn’t scared the daylights out of you check out this episode of The Unknown - Killer Robots on Netflix.

If you know someone that you think would enjoy this newsletter, share it with them and ask them to join using the link at the bottom of the page.

 

And remember — always back it up!

To get the Ken's Korner Newsletter delivered to your Inbox CLICK HERE