LIBERTY AND JUSTICE FOR ALL


Many Voices, One Freedom: United in the 1st Amendment

April 18, 2024


Your Source for Free Speech, Talk Radio, Podcasts, and News.


Medical Artificial Stupidity is the next frontier. We are living through the biggest medical blunder in the history of medicine, and we keep compounding it. Half the country has found out that “we wus had,” but the other half is still coming back for more shots, never mind the stats saying that every incremental shot carries roughly a 7% increase in your risk of being hospitalized with Covid, not to mention the risk of shortening your life expectancy through some of the adverse events.

There is no question that technology can help in health, with many useful applications for managing information. Still, the downside is that technology, if not used right, can become yet another thing that inserts itself into the relationship between doctor and patient. It is up to us to manage it wisely.

I have been seeing ads lately for a company called Forward, but they are swimming against the tide by promoting their technology as the solution. Technology is NOT the solution. At best, it is neutral. What both patients and doctors need is a better working relationship: all of our medicines and technology have a terrible way of expropriating health, along the lines of what Ivan Illich described so well in his book Medical Nemesis: The Expropriation of Health. What we have lost, in our culture of a pill for every ill, is precisely that: the ability to take ownership of our own health. Agency. Dr. Vinay Prasad, frequently a wonderful commentator on what works and what does not in healthcare, published a Substack post the other day titled “Chat GPT will change Medicine” and got this delicious comment from a certain Dr. K.:

Dr. K – Mar 29 (edited Mar 30)

Vinay, This is my area of expertise on which I have been publishing since the very first article on Medical Informatics I wrote for Science many decades ago. ChatGPT, on which I have pretty profound experience, is a language engine, not an all-knowing “artificial intelligence.” As you play with it more, you will discover that we have yet to achieve “artificial stupidity”… and it is a long slog from there to anything “intelligent.”

Someone, please give that man a cigar!

Please do not misunderstand me. I am not a Luddite. I am actively involved with a company that is bringing a novel AI machine, built with live neurons, to market, and I see quite the positive potential, but I also think about it critically. In this case, the neurons will take over the ‘traditional’ roles of the processor and the memory. So I am committed to the AI future.

However, what we do not need is Dr. K’s ‘artificial stupidity.’ My own experience with ChatGPT suggests that it is operating at exactly that level. The most fun I had with it was watching it apologize whenever I pointed out that it had brought in extraneous information instead of answering my question, which was, “What did Jesus mean with the phrase ‘My Kingdom is not of this world’?”

I got long-winded answers that brought in all manner of theological concepts and a whole catechism class of second-rate theology, instead of attempting a straightforward answer to the question. This kind of experience immediately suggests how and why this could be a disaster in the making in medicine. Artificial stupidity, indeed. Producing stupid answers faster is not likely to be helpful; it merely brings about disasters faster, ‘at the speed of Science,’ so to speak – pun intended.

What is AI anyway, besides being a misnomer?

Personally, I would always advocate speaking of artificial reasoning, as opposed to artificial intelligence. In the materialist universe, 2+2=4, but in the world of true intelligence, 2+2 can equal 5 or 3, depending on the circumstances. You could call it intuition or inspiration, but it is that non-quantifiable element that made Albert Einstein say that man is a non-local being having a local experience. The mind is completely abstract, but the ego thought system is all about specifics, which is why Jesus said that ‘to those outside the Kingdom, it all comes in parables.’ Everything dualistic is at best a parable, for truth is one and utterly non-dualistic by definition: there cannot be untruth within truth.

Again, in the materialist world of perception, it definitely seems that 2+2=4, and if not, something is wrong with you. Computers are, in principle, devices of applied mathematics, and in the mathematical model, deductive reasoning is a struggle. As a result, special languages were developed that are better suited to what is called ‘AI,’ and later still came chips emulating neurons as a more efficient way of implementing AI. Virtually all the major chipmakers are developing such neural emulation chips today, and Microsoft even included a neural processing unit in one model of the 9th-generation Surface Pro tablet (the Surface Pro 9 with 5G).

In other words, this is a silicon chip emulating the behavior of a neuron. On the whole, this model of ‘AI’ is based on the same logic that made Terry Bisson comment on “meat that thinks.” Materialism holds that the mind is an epiphenomenon of the body, whereas metaphysical idealism and similar schools of thought consider the body to be an epiphenomenon of the mind: mind is the cause, and the body is the effect. In the world of the body, 2+2=4; in the world of the mind, it is not so easy.

My go-to example is the story of a patient of my psychiatrist father. The man had studied mathematics and, on the morning of an exam, dreamed the solution to a known ‘unsolvable’ problem. He wrote it down for himself and could not fault the solution, so when the question came up on the test, he gave that answer. He initially received a demerit for it, since he was supposed to know the problem was unsolvable, but he eventually got other professors to agree with him, and in the end his professor changed the grade to what it should have been. That is the world of intuition playing into the manifest world in a very tangible way. Any good doctor can have such experiences with a patient, when he intuits beyond the accepted standard diagnoses, senses the real issue, and works with the patient to address it.

Harnessing AI

We all know a little about the ones and the zeros, about pushing and popping things on and off the stack (memory), about FIFO and LIFO, and so on. If you have ever played with an HP RPN calculator, you may have some feel for this. Depending on the tasks you have to perform, it can be quite efficient, but it has limits. Then we got into higher-level languages, and eventually some languages evolved that were more suited to AI-style development, such as LISP, C#, Smalltalk, Prolog, and others. Next, we got emulated neurons in solid-state chips, as referred to above, which are now starting to appear in consumer devices. With my company BCM Industries, we have designed a family of computers and storage devices that leverage live Neural IT, called TOD™ (Tissue Operating Device), ranging from 16 million neurons at the low end to 5 billion neurons at the high end, which is supercomputer territory. The storage devices are in Exabyte territory, meaning millions of Terabytes.
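For readers who want a concrete feel for the stack discipline mentioned above, here is a minimal sketch in Python of how an HP-style RPN calculator evaluates an expression using a LIFO stack. The function name and tokenization are illustrative, not taken from any particular product.

```python
def eval_rpn(tokens):
    """Evaluate a Reverse Polish Notation expression with a LIFO stack."""
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()  # last in, first out: right operand comes off first
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# "3 4 + 2 *" means (3 + 4) * 2 on an RPN calculator
print(eval_rpn("3 4 + 2 *".split()))  # → 14.0
```

Because operands wait on the stack until an operator arrives, RPN needs no parentheses at all, which is exactly why it is efficient for some tasks and awkward for others.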

In these machines, we can leverage the abilities of actual live neurons directly, which eliminates the need to program certain learned behaviors. We refer to this as NNI (Natural Neural Intelligence), which rests on the fact that natural neurons actually learn behaviors and, networked together, can learn to analyze very complex problems with vast amounts of data, and do it very fast.
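To give a flavor of what “learning a behavior” means at the level of a single artificial neuron (the solid-state abstraction that live-neuron systems aim to improve on), here is a minimal sketch of the classic perceptron learning rule in Python. The names, data, and parameters are illustrative assumptions, not BCM’s NNI implementation.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Nudge weights toward fewer errors each pass: the perceptron rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out  # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach a single neuron the logical AND behavior from examples
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in and_data])  # → [0, 0, 0, 1]
```

Nothing here was programmed to compute AND; the behavior emerges from repeated error correction, which is the same principle, vastly scaled up, behind both emulated and natural neural networks.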

While solid-state emulations of neurons are amazing, it is in the learning function that natural neurons have the edge. Thinking of the potential for medical applications, we can see how this could be hugely helpful, or a complete death trap. If we proceed with the wrong paradigms, we will have garbage-in, garbage-out on steroids, but if we leverage these capabilities for research purposes, the sky is the limit.

The experience above with ChatGPT speaks volumes: that system wastes cycles processing generalized information to create an impression of informed communication, but it really is not informed. It is mostly ridiculous. In research, in diagnostics, and also in medical imaging, however, this level of AI could be incredibly powerful. In imaging applications, it could potentially use sensor neurons to process analog images, which promises far greater accuracy, since you bypass the granular appearance of a pixelated representation. AI has the potential to vastly expand a wide range of medical applications, and the challenge will always be to make them helpful to doctors and patients alike.
